User Journal

Journal: Thrifty system for volume billed Internet. -HELP

Journal by advid.net

1) The problem:
In the countryside: no ADSL, but mobile 3G is reachable.
No budget for an unlimited plan; mobile Internet must stay under a few bucks per month.
No mobility needed.
Low-volume Internet needs: text emails on Gmail, occasional web site visits.

2) The solution:
3G key + external antenna + PC
Prepaid 3G mobile plan, billed on the volume of data downloaded, starting at only 3.4 € / month.

3) Need help for:
What system should the PC run?
It should not download anything implicitly, nor connect anywhere by itself.
I'm thinking of a tailored Linux with a customized web browser.
Offline email with the Google Chrome app, with online sync, seems interesting.

What else ?
Few people have this need, so advice and ideas are rare. Did I miss something?
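One way to guarantee that nothing connects by itself, on any distro, is a default-deny outbound firewall. Here is an iptables sketch (run as root; the allowed ports are just examples of the whitelist idea, not a tested ruleset):

```shell
# Drop all outgoing traffic by default, then whitelist the little you need.
iptables -P OUTPUT DROP
iptables -A OUTPUT -o lo -j ACCEPT                # local traffic
iptables -A OUTPUT -p udp --dport 53 -j ACCEPT    # DNS lookups
iptables -A OUTPUT -p tcp --dport 443 -j ACCEPT   # HTTPS (Gmail)
# Anything else (distro update checks, browser phone-home) is silently
# dropped, so it cannot eat into the prepaid data volume.
```

On top of that, disable the distro's automatic update checks and the browser's prefetching, so nothing even tries to connect.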

AMD

Journal: Friends and Foes

Journal by advid.net
As I've reached the 200-friends limit on Slashdot, I decided to keep track of certain friends (confirmed) or to queue new ones in case a friend disappoints me and frees a slot. This is sad; I hope the 200 limit will be raised...

Confirmed friends are protected against unfair deletion in case I really want to bring a new friend in and need to drop a random one. Challenged friends may be deleted from the list later.

Confirmed friends
http://slashdot.org/~LWATCDR
http://slashdot.org/~Ironsides
http://slashdot.org/~LionKimbro
http://slashdot.org/~NMerriam
http://slashdot.org/~misanthrope101
http://slashdot.org/~jgoemat
http://slashdot.org/~Grendel+Drago
http://slashdot.org/~Mark_MF-WN

Challenged friends
http://slashdot.org/~Opportunist
http://slashdot.org/~Teancum
http://slashdot.org/~geekoid

Queued friends
http://slashdot.org/~Demon-Xanth
http://slashdot.org/~dpilot
http://slashdot.org/~Sax+Maniac
http://slashdot.org/~Alzheimers
http://slashdot.org/~MaWeiTao
http://slashdot.org/~FireFury03
http://slashdot.org/~bentcd
http://slashdot.org/~fotbr
http://slashdot.org/~skrolle2
http://slashdot.org/~Chris_Mir
http://daleglass.net/
http://slashdot.org/~Maondas
http://slashdot.org/~babbling
http://slashdot.org/~evanbd
http://slashdot.org/~skam240
http://slashdot.org/~innerweb
http://slashdot.org/~tamnir
http://slashdot.org/~Heir+Of+The+Mess -
http://slashdot.org/~jstomel
http://slashdot.org/~etherlad
http://slashdot.org/~MightyYar
http://slashdot.org/~BakaHoushi
http://slashdot.org/~scotch
http://slashdot.org/~erroneus
http://slashdot.org/~tgibbs
http://slashdot.org/~Smidge204
http://slashdot.org/~sqrt(2)
http://slashdot.org/~mha
http://slashdot.org/~Creosote
http://slashdot.org/~Aglassis
http://slashdot.org/~exploder
http://slashdot.org/~SanityInAnarchy
http://slashdot.org/~Hatta
http://slashdot.org/~RobDude
http://slashdot.org/~ChronosWS
http://slashdot.org/~Belial6
http://slashdot.org/~jawtheshark
http://slashdot.org/~eclectic4 ++ (dec2013: last jan2010)
http://slashdot.org/~eldavojohn
http://slashdot.org/~flyingsquid
http://slashdot.org/~MightyMartian (but foe of friend)
http://slashdot.org/~thynk
http://slashdot.org/~Dhrakar
http://slashdot.org/~plnrtrvlr
http://slashdot.org/~unapersson
http://slashdot.org/~Khazunga
http://slashdot.org/~djmurdoch
http://slashdot.org/~Fuzzums
http://slashdot.org/~xstonedogx/
http://slashdot.org/~Magic5Ball/
http://slashdot.org/~Just+Some+Guy
http://slashdot.org/~giminy
http://slashdot.org/~steelfood
http://slashdot.org/~Kent+Recal
http://slashdot.org/~RAMMS+EIN/
http://slashdot.org/~thsths/
http://slashdot.org/~spun
http://slashdot.org/~sm62704 +++ (dec2013: last sept2008)
http://slashdot.org/~Andy+Smith rel +++ (dec2013: last jul2012)
http://slashdot.org/~Ethanol-fueled rel ++ (dec2013: last mar2012)

Foe
http://slashdot.org/~mapkinase

investigate
http://slashdot.org/~twitter/

check to eject Minwee (522556)

EDIT Dec2013: it looks like the 200-friends limit has been raised, so I dug into the queued friends to friend them...

Slashback

Journal: Backup software & organization (slashback)

Journal by advid.net
Backing up a Linux (or Other *nix) System

Here are some valuable thoughts; I'll rewrite something more concise later...

From digitalhermit (113459):

I deal with some aggregate 2 terabytes of storage on my home file servers. What works for me won't work for an enterprise corporate data center, but maybe some things are useful...

I think the article does a good job of explaining how to back up, but maybe just as important is "why?". Some posts say to put everything on a RAID or use mirroring or dd. What they fail to address is one important reason to back up: human error. You may wipe a file and then, a week later, need to recover it. If all you're doing is mirroring or RAID, no matter how reliable, your backups are worthless for that case.

There's also different classes of data. I have gigabytes of videos. Some are transcoded DVDs, some are raw footage. If I lose all my transcoded DVDs it's not as critical as if I lost raw footage. Why? The DVDs can be re-ripped. It will take a long time but the data can be recreated. For the raw footage it's different, even if I keep the original Mini-DV tapes, because re-recording the video from tape won't guarantee that the file is identical. If the file is different then the edits will be different. Then there's also mail spools, CVS, personal files, etc..

What I've found is that I archive my DVD rips once every few months. Other stuff is backed up once a week to another file server.

I couldn't care less about the OS. The file server runs Fedora Core 5. The only thing I keep is the Kickstart file, so that I can rebuild it within a matter of minutes and then restore the data from archives. This is just a matter of copying a Samba configuration and restarting.

For the web server, all content is kept within CVS. If the web server fails, it's just a matter of rebuilding the image and pulling the latest copy from CVS. Fifteen minutes to re-image the OS. Five minutes to pull down the latest content.

For DNS, initial configuration for 8 domains is done by a perl script that auto-creates the named.conf and all zone files. Then I just append the host list to the primary domain. Ten minutes at most.
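The original poster used a Perl script; the same auto-generation idea fits in a few lines of shell (the domain names and output path below are made up for illustration):

```shell
# Generate a named.conf fragment plus a stub zone file per domain.
OUT=/tmp/named-demo
mkdir -p "$OUT"
: > "$OUT/named.conf"            # start fresh on each run
for d in example.org example.net; do
  cat >> "$OUT/named.conf" <<EOF
zone "$d" {
    type master;
    file "db.$d";
};
EOF
  # Minimal zone skeleton; real host records get appended afterwards.
  cat > "$OUT/db.$d" <<EOF
\$TTL 86400
@   IN  SOA ns1.$d. admin.$d. ( 1 3600 900 604800 86400 )
    IN  NS  ns1.$d.
EOF
done
```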

Home directories are centralized on a file server using OpenLDAP and automounts. One filesystem to back up makes it easy. By being easy, it gets done automatically.
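For reference, the automount side of such a setup is just two small maps; the server name and export path below are hypothetical:

```
# /etc/auto.master: hand the /home tree to autofs
/home   /etc/auto.home

# /etc/auto.home: "&" is replaced by the requested user name,
# so each home is NFS-mounted on demand from the file server
*   -rw,hard,intr   fileserver:/export/home/&
```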

Other "machines" are virtual and these are copied to DVD whenever something drastic changes (e.g., major upgrade).

From swordgeek (112599):

When you work in a large environment, you start to develop a different idea about backups. Strangely enough, most of these ideas work remarkably well on a small scale as well.

tar, gtar, dd, cp, etc. are not backup programs. These are file or filesystem copy programs. Backups are a different kettle of fish entirely.

Amanda is a pretty good option. There are many others. The tool really isn't that important other than that (a) it maintains a catalog, and (b) it provides comprehensive enough scheduling for your needs.

The schedule is key. Deciding what needs to get backed up, when it needs to get backed up, how big of a failure window you can tolerate, and such is the real trick. It can be insanely difficult when you have a hundred machines with different needs, but fundamentally, a few rules apply to backups:

For backups:
1) Back up the OS routinely.
2) Back up the data obsessively.
3) Document your systems carefully.
4) TEST your backups!!!

For restores:
1) Don't restore machines--rebuild.
2) Restore necessary config files.
3) Restore data.
4) TEST your restoration.

All machines should have their basic network and system config documented. If a machine is a web server, that fact should be added to the documentation but the actual web configuration should be restored from OS backups. Build the machine, create the basic configuration, restore the specific configuration, recover the data, verify everything. It's not backups, it's not a tool, it's not just spinning tape; it's the process and the documentation and the testing.
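The "verify everything" step is the one most often skipped; a checksum manifest makes it mechanical. A self-contained sketch using throwaway /tmp directories (a real run would compare the original tree against the restored one):

```shell
ORIG=/tmp/verify-demo/orig
COPY=/tmp/verify-demo/copy
mkdir -p "$ORIG" "$COPY"
echo "payload" > "$ORIG/file.txt"
cp "$ORIG/file.txt" "$COPY/file.txt"   # stand-in for an actual restore

# Checksum both trees, sorted so the manifests are comparable.
( cd "$ORIG" && find . -type f -exec sha256sum {} + | sort ) > /tmp/verify-demo/before
( cd "$COPY" && find . -type f -exec sha256sum {} + | sort ) > /tmp/verify-demo/after

# Identical manifests = every restored file is bit-for-bit intact.
diff /tmp/verify-demo/before /tmp/verify-demo/after
```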

Slashdot.org

Journal: Slashdot survey: more suggestions

Journal by advid.net
This survey about Slashdot made me write some suggestions...
  • A kind of Slashdotpedia: after an interesting question I gather the best Slashdotter advice and my own research to make a journal entry. I suggest that I could back-link my journal from the article, even if archived. Then anybody browsing the article later could see that a guy has made a summary, a best-of, enriched with his own work. Also, those slashback summaries could be listed in a Slashdotpedia section for great value-added content (from a nerd point of view). See my three journal entries as an example.
    This mod is an easy one: few resources needed, work done by dedicated /.ers.
  • As a registered user I'd like to know which issues I've seen (loaded/clicked) and which articles also. Slashdot would need to keep a database of this, I guess you can't afford such a load. Benefit: I can read later just what I've missed, I don't need to keep track of read/unread items.
  • I like those topic pics :) showing up along with articles, but we should see more often two pictures for an entry that could fit in several sections. I guess those new tags can help also.
  • There's some space on the title bar that could be used for this:
    Display a kind of flag array (very small icons) to tell what the mood of the discussion is. Then we could see at a glance that they have (mainly) started to joke/troll, or that they are serious with great feedback. It's a kind of automatic tagging, but not for topics/content, only for mood / kind-of-answers / readers' behaviour.
Data Storage

Journal: Ideas for a Home Grown Network Attached Storage (slashback)

Journal by advid.net
Summary for this thread :
Ideas for a Home Grown Network Attached Storage?

[...]I would like to build my own NAS and am interested in hardware/software ideas. While the small form factor PC cases are attractive, my NAS will dwell in the basement so I am thinking of a cheap/roomy ATX case with lots of power.[...]

Notice: this text is a mix of several authors plus personal updates; I thank everyone. This time I tried to credit authors from the /. crowd ;-)

My conclusions are at the end along with request for comments.

from the /. crowd :

General advice

One big growable partition: take a bunch of disks and turn them into a RAID5 array. Make a volume group (LVM on Linux) and add the RAID array to it. Create a growable logical volume in the group and format it with a standard growable FS.
When you get new disks, simply create a new RAID5 array, add it to the volume group, and grow the FS.
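As commands, the recipe above looks roughly like this (device names are examples; this needs root and real spare disks, so treat it as a sketch, not a paste-ready script):

```shell
# Build a RAID5 array from three disks...
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
# ...make it an LVM physical volume inside a volume group...
pvcreate /dev/md0
vgcreate storage /dev/md0
lvcreate -l 100%FREE -n data storage
mkfs.xfs /dev/storage/data          # any growable FS works here

# Later, with new disks: second array, extend the group, grow LV and FS.
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sde /dev/sdf /dev/sdg
pvcreate /dev/md1
vgextend storage /dev/md1
lvextend -l +100%FREE /dev/storage/data
xfs_growfs /dev/storage/data        # XFS can grow while mounted
```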

You don't want everything on one big RAID0; I lost 200G of data that way. I can say I'll never make that mistake again.

FileSystem type

Common Linux file systems (ext, reiser, etc.), except XFS, contain critical data-losing bugs on file systems bigger than 2TB. This was found to be the case even in the most recent 2.6 kernels.
Tony Battersby posted a patch to the LBD mailing list recently to address the ones he could find, but lacking a full audit, you probably shouldn't use any filesystem other than XFS.
Considering the gravity of these bugs, you might consider using XFS for everything: if the developers left such critical bugs in for so long, it makes you wonder about the general quality of those filesystems.

What of IBM's JFS? We run that here on our .75ish TB file server, and it's been great for us. We've not had any data corruption issues since we deployed it ~1yr ago, and it's survived a number of power outages with no problems. I'm impressed so far :)

See TiVos. If they use XFS, it's probably because it deletes even very large files instantaneously, whereas most other filesystems take longer the larger the file is. This is a clear advantage if you want to delete a large movie file from a disk at the same time that you are recording TV to it.


Structure

There's no reason the NAS box has to have all the files in one file system. Just create multiple partitions or logical volumes. You export directory trees across the network on NAS, not file systems.

Exporting shares

I feel strange advocating an MS-originated protocol -- but the truth is, serving files via Samba on Linux is going to be the best-performing[1], most-compatible remote file system available. [1] Samba beats the MS implementations of SMB/CIFS. No guarantees about Samba vs NFS, GFS, Coda, whatever.

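A minimal Samba export of such a tree is only a few lines of smb.conf; the share name, path, and group below are placeholders:

```
[global]
    workgroup = HOME
    security = user

[storage]
    path = /srv/nas/storage
    read only = no
    valid users = @nasusers
```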

RAID or not RAID

Regarding RAID, it's been my experience working at The Archive that RAID is often more trouble than it's worth, especially when it comes to data recovery. In theory, recovery is easy: you just replace a bad disk, it rebuilds the missing data, and you're good to go. In practice, though, you will often not notice that one of your disks is borked until two disks are borked (or however many it takes for your RAID system to stop working), and then you have a major pain in the ass on your hands. At least with one filesystem per disk, you can attempt to save the filesystem by dd'ing the entire raw partition contents onto a different physical drive of the same make and model, skipping bad sectors, and then running fsck on the good drive. But if you have one whopping huge 2.4TB filesystem, then you can't do that trick without a second 2.4TB device to dd it all onto, and even if you have that, it's probably going to be copied over the network, which makes an already slow process slower. If you can stomach it, you might just want to make one filesystem per hard drive and NFS (or Samba, or whatever) export each of your six filesystems separately.

On the contrary:

Saying so about RAID is insane.
See mdadm/mdmonitor to get a mail as soon as there is a failure.
Personally I would recommend setting up Nagios or some other monitoring software. Every time something goes wrong on a machine, we write a script to monitor for that. Now, very few things go wrong unnoticed.
I'd really much prefer that to not having a RAID array. We've used that system (*knock*, *knock*, *knock*) for 4 years, and with about 5TB of filesystems at work, we've never ever lost a RAID'ed filesystem. We have lost several incredibly important filesystems that weren't RAID'ed.
If you have spare drives around, you can configure mdadm to automatically add them into the array. Unlike the standard md tools, you can have one spare shared by any number of md arrays.
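The failure-mail and shared-spare behaviour mentioned above is configured in /etc/mdadm.conf; the address, UUIDs, and group name below are placeholders:

```
# Where "mdadm --monitor" (or the mdmonitor service) sends alerts
# when it sees a failed or degraded array.
MAILADDR admin@example.com

# Arrays in the same spare-group share their hot spares: a spare
# attached to either array is pulled in wherever a disk fails.
ARRAY /dev/md0 UUID=aaaaaaaa:... spare-group=nas
ARRAY /dev/md1 UUID=bbbbbbbb:... spare-group=nas
```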

Beware of some misleading "advice": "RAID 5 is about as fast as RAID-0 on reads..." is fine, but "...the bottleneck on writes is the parity calculation, not access time for the drives." is false:
Even the paltry 366MHz Celeron in my fileserver can perform parity calculations at nearly 1GB/sec. The bottleneck with RAID5 most certainly *is* the physical disk accesses (assuming any remotely modern hardware).
I would suggest using a motherboard with multiple PCI buses. Basically, look for something that's got two (or more) 64-bit PCI-X slots, as these boards nearly always have multiple PCI buses.
Also, putting multiple IDE drives on one channel will destroy performance.
Using RAID50 instead of RAID5 is pointless.
Just buy yourself some four-port IDE controllers, put one drive on each port, and use Linux's software RAID to create two four-disk RAID5 devices (or one 8-disk device if you prefer). Then put LVM over the top to make the space more manageable. If you've got the hardware resources, make sure each disk controller is on its own PCI bus, or at the very least sharing it with something inconsequential (like the USB controller or the video card).

External USB / FireWire enclosures

The state of external enclosures, USB chipsets and firewire chipsets is a sad thing.
I had to go through 3 different USB chipsets (different motherboards) before my external enclosure would write data without random corruption.
Firewire's no better, either. I had an Adaptec firewire card (Texas Instruments chipset, I believe) and it worked with my external drives, yet after 5 or 10 minutes, would randomly drop the drive and corrupt data.

Testimonial

I did this a while back (3+ years ago, so it's obviously not 1TB).
My fileserver runs 24/7 and has been doing so for about 3 years (minus downtime for moving).
I use 4 40GB SCSI drives in a RAID 5 configuration, using Linux software RAID. (Obviously I would use large IDE drives now, but these were the cheapest per GB at the time, and I already had the SCSI controller lying around.)
This gives me about 136GB of useable space. The partition is running ext3 as its filesystem. The CPU is a Pentium II 450 and it has 256MB of RAM. It runs on a Tyan dual mobo with built-in 10/100 and SCSI.
The server is running an older Red Hat release with no GUI, upgraded to kernel 2.6.8.1.
The RAID is shared on the network using Samba.
Read performance is decent, around 5-7MBytes/sec, which is pretty good on a 100Mbit link. Write speed is slower, around 3-5MB/s.

Misc

When you're dealing with that much storage, you really need to categorize your files into what needs to be backed up and what doesn't.
If you use Linux, LVM will become your new best friend. Also think of: noise and power use, heat and airflow.

Don't forget to enable S.M.A.R.T. drive monitoring.
I do a lot of software RAIDs, and with smartctl no drive crash has ever surprised me. I always had the time to get a spare disk and replace it in the array before something unfunny happened.
Do a smartctl -t short /dev/hda every week and a -t long every month or so...
Read its online page: http://smartmontools.sourceforge.net/
Software RAID works perfectly on Linux... and combined with LVM, things get even better.
A number of people also recommended mdadm [freshmeat.net] for building and maintaining software RAID systems on Linux.
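The suggested test schedule translates directly into crontab lines (or into smartd's own scheduler); /dev/hda and the times below are examples:

```
# Weekly short self-test (Sunday 02:00), monthly long one (1st, 03:00).
0 2 * * 0   /usr/sbin/smartctl -t short /dev/hda
0 3 1 * *   /usr/sbin/smartctl -t long  /dev/hda
# Check the results later with: smartctl -l selftest /dev/hda
```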

This won't be the best solution noise-wise, but it will extend drive lifetime: cut extra holes in the case and build an air-flow tunnel to help cool the drives. I measured a drop from 46C to 25C with a 12cm Nexus low-speed fan.

Some small commercial solutions

Device for Samba-sharing a USB drive, $100.
You need to add: a USB drive, or a drive + USB adapter, up to 8 of them.

Rebyte, $150.
A simple flash Linux distro with a converter board that plugs into an IDE slot. Supports all the standard RAID setups. I recommend investing in cooling for hard drives -- not things you want to have fail on a NAS system.

Credits:

Hast zoeith GigsVT HoneyBunchesOfGoats sarahemm booch richie2000 -dsr- Keruo TTK Ciar ComputerSlicer23 drsmithy delus10n0 tchuladdiass Winter beegle

*** advid.net Conclusion ***

Some small commercial solutions are worth a look (for the lazy/hurried), but a real DIY setup would be:

An ATX PC tower with a Linux 2.6 distro and whatever disks you can afford (ATA, SATA, SCSI); low or medium RAM and CPU are enough, and a 10/100 NIC of course. Better performance with one drive per channel.
#1 Use software RAID5 (raidtools2 or mdadm) and LVM.
#2 One or more logical volumes.
#3 A growable filesystem (XFS, JFS, ext3, ext2).
#4 A reliable filesystem (XFS, JFS, stable ext3? or good old ext2?).
#5 Export shares with Samba.
The box: since the NAS could sit outside the rooms you live in, add extra holes and fans to keep the disks cool. Power: a UPS here, of course.

Please can you comment about:

#1 Some pointed out that RAID could be worse than no RAID but simple copies from one disk to another; I'm thinking of rsyncing locally, or even to some other host on the LAN when reachable. What do you think?

#2 Different logical volumes for different kinds of files (size, lots of writes or mainly reads, backup needs). Thus we could choose the FS and tune it differently on each volume. From a backup point of view it could be simpler and smarter: think of a small FS with your most precious files (your work); it could be handled in a first-class way (replicas, multiple backups everywhere, ...). What do you think?

#3 & #4 Some more feedback for XFS, JFS, ext3, ext2 on kernel 2.6 ?

Edit 1.1:

I think ZFS from Sun is the best FS for this purpose; too bad it can't run on Linux... yet.

Rev 1.1

Hardware Hacking

Journal: Running a Server at Freezing Temperatures (slashback)

Journal by advid.net
Summary of this thread:
Running a Server at Freezing Temperatures
As a part of a backup solution, I'm thinking of running a backup server in my unheated, unattached garage [...] the temperatures very often drop below zero degrees Celsius.

Notice: this text is a mix of several authors plus personal updates; I thank everyone.

Hard drive

Lubricants become more viscous at low temps:
If it got really cold, the lubricants in the drive spindle could actually become solid, freezing the bearings and burning out the motor.
Tell your PC never to turn off hard disks and never to turn off fans (they might freeze if they stop, and not start again).

Thermal expansion of the platters:
Hard drive platters go through a normal amount of expansion, because solids expand when heated and contract when cooled. Drive controllers are designed to recalibrate occasionally to check for expansion and ensure the heads are positioned correctly; off-track positioning leads to errors. But I seriously doubt the calibration would work outside the range of temperatures designed into the controller.

Case

Make sure your case is hardened. Every little critter, including mice, will want to live in the warm case. We had a computer in an astronomical observatory dome and mice built their nest on the CPU. The acid in urine from the mice destroyed the motherboard.

Get a case with a thermostatically-controlled main fan (not CPU fan, main fan). Put this in a 5-sided wooden box (hardened against critters, screened on the bottom) and insulate it with construction foam (inside) on four sides and the top. Half-inch foam will probably do. Vent the system fan out the bottom.
What this will do is create a "bubble" of warm air inside the box that is vented when the fan is running and stable when it is off. This will keep your box temperature roughly even. If you are concerned about cold-starting hard disks after a period of off-time, make sure you have a power supply which remains off after a power loss and add a 100 W light bulb inside the box. When you want to power the system back on, switch the bulb on and leave it for an hour or two before you hit the power button, then turn the bulb off again. Do not bring cold hardware into a warm, humid house to warm up - you will get condensation.

Take the floppy out of the machine, and replace the hole in the front with a blank panel. It might be a good idea to do that with the CD/DVD drives as well. Make sure that the back of the case is all sealed up (i.e., no open holes for old PCI devices you no longer have). Lastly, don't put anything over or close to it. You're going to need it to be able to suck in air and evacuate it with the fans.

Misc

Some positive feedback for running such a server at 5C without any problem.
Mind the capacitors' minimum temperature specs.

Software

Journal: Software Survival Kit (slashback)

Journal by advid.net
Summary of this thread:
What Would You Put Into A Software Survival Kit?
Plus personal updates

Systems

  • Win98SE
  • Your preferred Linux distro
  • WinNT4
  • WinXP

Bootable media

  • TOMSRTBT
  • Win98SE Boot Disks
  • Ghost bootable floppies set
  • Knoppix
  • QNX Demo, both modem & lan Disks
  • DOS 6.22 disks
  • any Linux rescue disk/CD

System Tools

  • Partition Magic
  • Norton Ghost
  • The new FDisk for large partitions
  • LapLink, FastLynx http://www.sewelld.com/FastLynx.asp FastLynx has the advantage that it can transfer files between XP and say DOS or Linux, over Serial, Parallel, or USB
  • Delpart.exe
  • McAfee Virus Scan, command line version
  • Antivirus. http://www.free-av.com, F-Prot http://f-prot.com
  • Norton Utilities
  • Hard disk checking utilities (from Maxtor, Seagate, etc) PowerMax http://www.maxtor.com/en/support/downloads/powermax.htm , SeaDiag, HDDiag, WD Lifeguard
  • Memtest86, http://www.memtest86.com/
  • DosDiag - for checking your hardware; http://www.5star-shareware.com/Utilities/Diagnostics/bcm-diagnostics.html
  • Favorite tools from Sysinternal/Winternals: FileMon,RegMon,PsTools, ... ERD, NTFS DOS Pro
  • BPR, disk & partition recovery, http://www.data-recovery-software.com/
  • Offline NT Password & Reg Editor, http://home.eunet.no/~pnordahl/ntpasswd/
  • F.I.R.E. (Forensic Incident Response Environment) Linux + security tools, http://fire.dmzs.com/?section=faq
  • ? SpinRite http://grc.com/spinrite.htm
  • ? Lavasoft Adaware

User Tools

  • Undelete. A good and free version can be found at http://home.arcor.de/christian_grau/rescue/
  • ZIP, RAR, ACE. Bzip2 and gzip for DOS or Win32
  • UltraEdit
  • ? XTGold 2.0 or 2.5 - runs on DOS
  • ? XTree Ztree www.ztree.com

Misc Apps & Files

  • Acrobat Reader, Quick Time, Flash/ShockWave
  • Nero Burning Rom
  • misc drivers for personal hardware and common nomad devices
  • MS Service Packs (OS/Office)
  • Standalone Web Browser ( OffByOne, others ... )
  • PowerDVD, PaintShopPro
  • Microsoft's free Word and Excel viewers

HardWare related to OS installation

  • Blank Diskettes
  • Screwdrivers
  • Spare screws
  • Spare IDE cable
  • Spare jumpers
  • Spare IDE drive or external drive
  • RJ45 cross over CAT-5 cable
  • Laplink Cable
  • RS 232 cross over serial cable

Links

  • List of rescue disks: http://plug.twuug.org/articles/rescuedisk.html
