Journal: Easy LAMP Install for CentOS/RHEL

http://www.howtoforge.com/quick-n-easy-lamp-server-centos-rhel

Goal

To set up a LAMP server on a fresh VPS or dedicated server running CentOS 5.0 with at least 256 MB of RAM. We will also install Webmin, a free server control panel for Linux. If you are using Debian/Ubuntu, refer to this article.

Install Apache

Apache is the most popular HTTP server for Linux servers.

yum install httpd httpd-devel

We install the httpd-devel libraries as well, in case we need to compile additional modules from source later, just to be on the safe side. The Apache configuration file is located at /etc/httpd/conf/httpd.conf. Start Apache with:

/etc/init.d/httpd start

Install MySQL Database Server

MySQL is a widely used open source database server on Linux and integrates well with PHP and Apache on CentOS/RHEL.

yum install mysql mysql-server mysql-devel

If you try to run mysql at the command prompt now, you will get this error:

ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock'

This is because the mysqld daemon is not running yet when you launch the mysql client. The file /var/lib/mysql/mysql.sock is created automatically when the mysqld daemon starts for the first time.

To fix:

First start the mysql daemon, then launch the mysql client:

/etc/init.d/mysqld start
mysql

Changing MySQL Root Password

By default, the root password for the mysql database is empty. From a security standpoint, it is a good idea to set one.

mysql> USE mysql;
mysql> UPDATE user SET Password=PASSWORD('newpassword') WHERE user='root';
mysql> FLUSH PRIVILEGES;

Once done, check by logging in:

mysql -u root -p
Enter Password:

To Create A New MySQL User

To create a new mysql user 'guest' with 'all privileges' on the database 'demo':

mysql> CREATE DATABASE demo;
mysql> GRANT ALL PRIVILEGES ON demo.* TO 'guest'@'localhost' IDENTIFIED BY 'guest' WITH GRANT OPTION;
mysql> FLUSH PRIVILEGES;

(The IDENTIFIED BY clause already sets the password for 'guest', so no separate UPDATE of the user table is needed.)

That's it! MySQL is ready! Remember the root password, as we will need it for phpMyAdmin.

Install PHP5 Scripting Language

Installing PHP5 with the necessary modules is straightforward, and it configures itself for both the Apache and MySQL environment.

yum install php php-mysql php-common php-gd php-mbstring php-mcrypt php-devel php-xml

Don't forget to install php-gd (the GD library). It is essential if you plan to run CAPTCHA scripts on the server, and the other modules above are likewise required by common PHP applications that depend on MySQL and related functions.

Restart Apache to load PHP:

/etc/init.d/httpd restart

To Test If PHP Is Working:

Create a file named /var/www/html/test.php containing a call to the phpinfo() function inside PHP tags.
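One way to create it from the shell (a minimal sketch; /var/www/html is the default DocumentRoot on CentOS, adjust DOCROOT if yours differs):

```shell
# Sketch: create the phpinfo() test page. DOCROOT is an assumption;
# point it at your actual Apache DocumentRoot if it differs.
DOCROOT="${DOCROOT:-/var/www/html}"
mkdir -p "$DOCROOT"
cat > "$DOCROOT/test.php" <<'EOF'
<?php
// test.php - prints the PHP configuration report
phpinfo();
EOF
```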

Then point your browser to http://ip.address/test.php.

That's it! You should see a PHP information page displaying all kinds of paths and installed modules.

Closely observe the installed configuration on your server.

* PHP Paths (php.ini path)
* Apache paths and loaded modules (mod_security, mod_evasive if installed)
* PHP GD Library
* MySQL paths and other information

Install phpMyAdmin

phpMyAdmin is a free web-based MySQL database administration tool. Without it, performing MySQL database operations from the command line quickly becomes tedious, which is why most webmasters install phpMyAdmin alongside the MySQL server.

yum install phpmyadmin

Point your browser to: http://ip.address/phpmyadmin.

Common Errors

You might encounter the following errors while configuring phpmyadmin.

Forbidden
You don't have permission to access /phpmyadmin/ on this server.

To fix:

Edit /etc/httpd/conf.d/phpmyadmin.conf and comment out the line Deny from all.

nano /etc/httpd/conf.d/phpmyadmin.conf

Order Deny,Allow
# Deny from all
Allow from 127.0.0.1
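The same edit can be made non-interactively with sed. A sketch, shown on a scratch copy so nothing real is touched; on the server, CONF would be /etc/httpd/conf.d/phpmyadmin.conf:

```shell
# Sketch: comment out "Deny from all" with sed. We operate on a scratch
# copy here; point CONF at /etc/httpd/conf.d/phpmyadmin.conf on a real
# server.
CONF=./phpmyadmin.conf.sample
printf 'Order Deny,Allow\nDeny from all\nAllow from 127.0.0.1\n' > "$CONF"
sed -i 's/^Deny from all/# Deny from all/' "$CONF"
cat "$CONF"
```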

Error
The configuration file now needs a secret passphrase (blowfish_secret)

To fix:

nano /usr/share/phpmyadmin/config.inc.php

Look for the following line and enter any passphrase. Just don't leave it empty!

$cfg['blowfish_secret'] = 'mydemopass'; /* YOU MUST FILL IN THIS FOR COOKIE AUTH! */
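Rather than inventing a passphrase, you can generate a random one and patch it in with sed. A sketch on a scratch copy (the real file location varies by install):

```shell
# Sketch: set blowfish_secret to a random 32-character hex string.
# Operates on a scratch copy; point CONF at your phpMyAdmin config
# file on a real server.
CONF=./config.inc.php.sample
echo "\$cfg['blowfish_secret'] = ''; /* YOU MUST FILL IN THIS FOR COOKIE AUTH! */" > "$CONF"
# 16 random bytes -> 32 hex characters
SECRET=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
sed -i "s/= ''/= '$SECRET'/" "$CONF"
grep blowfish_secret "$CONF"
```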

It worked for me using the above methods!
Log into phpMyAdmin with the MySQL root password we set while installing the MySQL database.

Install Webmin

Webmin is a free server hosting control panel for Linux. It is a web-based administration tool and can be handy for tweaking settings on your server if you are a beginner to Linux! You can download Webmin here. Since Webmin cannot be installed using yum, we download the RPM package and install it on our server.

wget
rpm -i webmin-1.410-1.noarch.rpm

That should be a pretty easy installation! Remember that Webmin uses port 10000, which must not be blocked by your firewall.

Point your browser to: http://ip.address:10000

You should see a Webmin login page. But we don't know the login and password yet! To set the Webmin password, run:

/usr/libexec/webmin/changepass.pl /etc/webmin admin

Log in with the admin username and new webmin password!
To uninstall webmin, just run: /etc/webmin/uninstall.sh

Final Steps

We want Apache and MySQL to load at every boot, so we switch them on using chkconfig:

chkconfig httpd on
chkconfig mysqld on

You can also leave comments on my blog. I would appreciate any feedback as well!

This tutorial was written and contributed to HowToForge by Mr.Balakrishnan who currently runs MySQL-Apache-PHP.com. Permission is fully granted to copy/republish this tutorial in any form, provided a source is mentioned with a live link back to the author's site.


Journal: ZFS EXPORT IMPORT HOWTO (Migrating ZFS Storage Pools)

Migrating ZFS Storage Pools

Occasionally, you might need to move a storage pool between systems. To do so, the storage devices must be disconnected from the original system and reconnected to the destination system. This task can be accomplished by physically recabling the devices, or by using multiported devices such as the devices on a SAN. ZFS enables you to export the pool from one machine and import it on the destination system, even if the systems are of different endianness. For information about replicating or migrating file systems between different storage pools, which might reside on different machines, see Sending and Receiving ZFS Data.

* Preparing for ZFS Storage Pool Migration
* Exporting a ZFS Storage Pool
* Determining Available Storage Pools to Import
* Importing ZFS Storage Pools From Alternate Directories
* Importing ZFS Storage Pools
* Recovering Destroyed ZFS Storage Pools

Preparing for ZFS Storage Pool Migration

Storage pools should be explicitly exported to indicate that they are ready to be migrated. This operation flushes any unwritten data to disk, writes data to the disk indicating that the export was done, and removes all information about the pool from the system.

If you do not explicitly export the pool, but instead remove the disks manually, you can still import the resulting pool on another system. However, you might lose the last few seconds of data transactions, and the pool will appear faulted on the original system because the devices are no longer present. By default, the destination system cannot import a pool that has not been explicitly exported. This condition is necessary to prevent you from accidentally importing an active pool that consists of network-attached storage that is still in use on another system.
Exporting a ZFS Storage Pool

To export a pool, use the zpool export command. For example:

# zpool export tank

The command attempts to unmount any mounted file systems within the pool before continuing. If any of the file systems fail to unmount, you can forcefully unmount them by using the -f option. For example:

# zpool export tank
cannot unmount '/export/home/eschrock': Device busy
# zpool export -f tank

After this command is executed, the pool tank is no longer visible on the system.

If devices are unavailable at the time of export, the devices cannot be identified as cleanly exported. If one of these devices is later attached to a system without any of the working devices, it appears as "potentially active."

If ZFS volumes are in use in the pool, the pool cannot be exported, even with the -f option. To export a pool with a ZFS volume, first ensure that all consumers of the volume are no longer active.

For more information about ZFS volumes, see ZFS Volumes.
Determining Available Storage Pools to Import

After the pool has been removed from the system (either through an explicit export or by forcefully removing the devices), you can attach the devices to the target system. ZFS can handle some situations in which only some of the devices are available, but a successful pool migration depends on the overall health of the devices. In addition, the devices do not necessarily have to be attached under the same device name. ZFS detects any moved or renamed devices, and adjusts the configuration appropriately. To discover available pools, run the zpool import command with no options. For example:

# zpool import
  pool: tank
        id: 11809215114195894163
  state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

                tank ONLINE
                    mirror-0 ONLINE
                        c1t0d0 ONLINE
                        c1t1d0 ONLINE

In this example, the pool tank is available to be imported on the target system. Each pool is identified by a name as well as a unique numeric identifier. If multiple pools with the same name are available to import, you can use the numeric identifier to distinguish between them.

Similar to the zpool status command output, the zpool import output includes a link to a knowledge article with the most up-to-date information regarding repair procedures for the problem that is preventing a pool from being imported. In this case, the user can force the pool to be imported. However, importing a pool that is currently in use by another system over a storage network can result in data corruption and panics as both systems attempt to write to the same storage. If some devices in the pool are not available but sufficient redundant data exists to provide a usable pool, the pool appears in the DEGRADED state. For example:

# zpool import
    pool: tank
        id: 11809215114195894163
  state: DEGRADED
status: One or more devices are missing from the system.
action: The pool can be imported despite missing or damaged devices. The
                fault tolerance of the pool may be compromised if imported.
      see: http://www.sun.com/msg/ZFS-8000-2Q
config:

                NAME STATE READ WRITE CKSUM
                tank DEGRADED 0 0 0
                    mirror-0 DEGRADED 0 0 0
                        c1t0d0 UNAVAIL 0 0 0 cannot open
                        c1t3d0 ONLINE 0 0 0

In this example, the first disk is damaged or missing, though you can still import the pool because the mirrored data is still accessible. If too many faulted or missing devices are present, the pool cannot be imported. For example:

# zpool import
    pool: dozer
        id: 9784486589352144634
  state: FAULTED
action: The pool cannot be imported. Attach the missing
                devices and try again.
      see: http://www.sun.com/msg/ZFS-8000-6X
config:
                raidz1-0 FAULTED
                    c1t0d0 ONLINE
                    c1t1d0 FAULTED
                    c1t2d0 ONLINE
                    c1t3d0 FAULTED

In this example, two disks are missing from a RAID-Z virtual device, which means that sufficient redundant data is not available to reconstruct the pool. In some cases, not enough devices are present to determine the complete configuration. In this case, ZFS cannot determine what other devices were part of the pool, though ZFS does report as much information as possible about the situation. For example:

# zpool import
pool: dozer
        id: 9784486589352144634
  state: FAULTED
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
                devices and try again.
      see: http://www.sun.com/msg/ZFS-8000-6X
config:
                dozer FAULTED missing device
                    raidz1-0 ONLINE
                        c1t0d0 ONLINE
                        c1t1d0 ONLINE
                        c1t2d0 ONLINE
                        c1t3d0 ONLINE
Additional devices are known to be part of this pool, though their
exact configuration cannot be determined.

Importing ZFS Storage Pools From Alternate Directories

By default, the zpool import command only searches devices within the /dev/dsk directory. If devices exist in another directory, or you are using pools backed by files, you must use the -d option to search alternate directories. For example:

# zpool create dozer mirror /file/a /file/b
# zpool export dozer
# zpool import -d /file
    pool: dozer
        id: 7318163511366751416
  state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

                dozer ONLINE
                    mirror-0 ONLINE
                        /file/a ONLINE
                        /file/b ONLINE
# zpool import -d /file dozer

If devices exist in multiple directories, you can specify multiple -d options.
Importing ZFS Storage Pools

After a pool has been identified for import, you can import it by specifying the name of the pool or its numeric identifier as an argument to the zpool import command. For example:

# zpool import tank

If multiple available pools have the same name, you must specify which pool to import by using the numeric identifier. For example:

# zpool import
    pool: dozer
        id: 2704475622193776801
  state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

                dozer ONLINE
                    c1t9d0 ONLINE

    pool: dozer
        id: 6223921996155991199
  state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

                dozer ONLINE
                    c1t8d0 ONLINE
# zpool import dozer
cannot import 'dozer': more than one matching pool
import by numeric ID instead
# zpool import 6223921996155991199

If the pool name conflicts with an existing pool name, you can import the pool under a different name. For example:

# zpool import dozer zeepool

This command imports the exported pool dozer using the new name zeepool.

If the pool was not cleanly exported, ZFS requires the -f flag to prevent users from accidentally importing a pool that is still in use on another system. For example:

# zpool import dozer
cannot import 'dozer': pool may be in use on another system
use '-f' to import anyway
# zpool import -f dozer

Note:

Do not attempt to import a pool that is active on one system to another system. ZFS is not a native cluster, distributed, or parallel file system and cannot provide concurrent access from multiple, different hosts.

Pools can also be imported under an alternate root by using the -R option. For more information on alternate root pools, see Using ZFS Alternate Root Pools.
Recovering Destroyed ZFS Storage Pools

You can use the zpool import -D command to recover a storage pool that has been destroyed. For example:

# zpool destroy tank
# zpool import -D
    pool: tank
        id: 5154272182900538157
  state: ONLINE (DESTROYED)
action: The pool can be imported using its name or numeric identifier.
config:

                tank ONLINE
                    mirror-0 ONLINE
                        c1t0d0 ONLINE
                        c1t1d0 ONLINE

In this zpool import output, you can identify the tank pool as the destroyed pool because of the following state information:

state: ONLINE (DESTROYED)

To recover the destroyed pool, run the zpool import -D command again with the pool to be recovered. For example:

# zpool import -D tank
# zpool status tank
    pool: tank
  state: ONLINE
  scrub: none requested
config:

                NAME STATE READ WRITE CKSUM
                tank ONLINE
                    mirror-0 ONLINE
                        c1t0d0 ONLINE
                        c1t1d0 ONLINE

errors: No known data errors

If one of the devices in the destroyed pool is faulted or unavailable, you might be able to recover the destroyed pool anyway by including the -f option. In this scenario, you would import the degraded pool and then attempt to fix the device failure. For example:

# zpool destroy dozer
# zpool import -D
pool: dozer
        id: 13643595538644303788
  state: DEGRADED (DESTROYED)
status: One or more devices could not be opened. Sufficient replicas exist for
                the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
      see: http://www.sun.com/msg/ZFS-8000-2Q
config:

                NAME STATE READ WRITE CKSUM
                dozer DEGRADED 0 0 0
                    raidz2-0 DEGRADED 0 0 0
                        c2t8d0 ONLINE 0 0 0
                        c2t9d0 ONLINE 0 0 0
                        c2t10d0 ONLINE 0 0 0
                        c2t11d0 UNAVAIL 0 35 1 cannot open
                        c2t12d0 ONLINE 0 0 0

errors: No known data errors
# zpool import -Df dozer
# zpool status -x
    pool: dozer
  state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
                the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
      see: http://www.sun.com/msg/ZFS-8000-2Q
  scrub: scrub completed after 0h0m with 0 errors on Thu Jan 21 15:38:48 2010
config:

                NAME STATE READ WRITE CKSUM
                dozer DEGRADED 0 0 0
                    raidz2-0 DEGRADED 0 0 0
                        c2t8d0 ONLINE 0 0 0
                        c2t9d0 ONLINE 0 0 0
                        c2t10d0 ONLINE 0 0 0
                        c2t11d0 UNAVAIL 0 37 0 cannot open
                        c2t12d0 ONLINE 0 0 0

errors: No known data errors
# zpool online dozer c2t11d0
Bringing device c2t11d0 online
# zpool status -x
all pools are healthy


Journal: ZFS Basics Tutorial found on Net::Thank the tubes!

zfs tutorial part 1
Learning to use ZFS, Sun's new filesystem.

ZFS is an open source filesystem used in Solaris 10, with growing support from other operating systems. This series of tutorials shows you how to use ZFS with simple hands-on examples that require a minimum of resources.

In this tutorial I hope to give you a brief overview of ZFS and show you how to manage ZFS pools, the foundation of ZFS. In subsequent parts we will look at ZFS filesystems in more depth.

This tutorial was created on 2007-03-07 and last revised on 2008-08-24.
ZFS Tutorial Series

1. Overview of ZFS & ZFS Pool Management
2. ZFS Filesystem Management, Mountpoints and Filesystem Properties

Let your hook be always cast; in the pool where you least expect it, there will be a fish. - Ovid
Getting Started
You need:

* An operating system with ZFS support:
o Solaris 10 6/06 or later [download]
o OpenSolaris [download]
o Mac OS X 10.5 Leopard (requires ZFS download)
o FreeBSD 7 (untested) [download]
o Linux using FUSE (untested) [download]
* Root privileges (or a role with the appropriate ZFS rights profile)
* Some storage, either:
o 512 MB of disk space on an existing partition
o Four spare disks of the same size

Using Files

To use files on an existing filesystem, create four 128 MB files, e.g.:

Code: Select all
        # mkfile 128m /home/ocean/disk1
        # mkfile 128m /home/ocean/disk2
        # mkfile 128m /home/ocean/disk3
        # mkfile 128m /home/ocean/disk4

Code: Select all
        # ls -lh /home/ocean

total 1049152
-rw------T 1 root root 128M Mar 7 19:48 disk1
-rw------T 1 root root 128M Mar 7 19:48 disk2
-rw------T 1 root root 128M Mar 7 19:48 disk3
-rw------T 1 root root 128M Mar 7 19:48 disk4
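mkfile is Solaris-specific. If you are following along on Linux (e.g. with the FUSE port), equivalent sparse files can be created with truncate. A sketch using an assumed scratch directory in place of /home/ocean:

```shell
# Sketch: Linux equivalent of the mkfile commands above. The /tmp/ocean
# scratch directory is an assumption; substitute your own path.
mkdir -p /tmp/ocean
for i in 1 2 3 4; do
  truncate -s 128M "/tmp/ocean/disk$i"   # sparse 128 MB file
done
ls -lh /tmp/ocean
```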

Using Disks

To use real disks in the tutorial make a note of their names (eg. c2t1d0 or c1d0 under Solaris). You will be destroying all the partition information and data on these disks, so be sure they're not needed.

In the examples I will be using files named disk1, disk2, disk3, and disk4; substitute your disks or files for them as appropriate.
ZFS Overview

The architecture of ZFS has three levels. One or more ZFS filesystems exist in a ZFS pool, which consists of one or more devices* (usually disks). Filesystems within a pool share its resources and are not restricted to a fixed size. Devices may be added to a pool while it's still running, e.g. to increase the size of the pool. New filesystems can be created within a pool without taking filesystems offline. ZFS supports filesystem snapshots and cloning of existing filesystems. ZFS manages all aspects of the storage: volume management software (such as SVM or Veritas) is not needed.

*Technically a virtual device (vdev), see the zpool(1M) man page for more.

ZFS is managed with just two commands:

* zpool - Manages ZFS pools and the devices within them.
* zfs - Manages ZFS filesystems.

If you run either command with no options it gives you a handy options summary.
Pools

All ZFS filesystems live in a pool, so the first step is to create a pool. ZFS pools are administered using the zpool command.

Before creating new pools you should check for existing pools to avoid confusing them with your tutorial pools. You can check what pools exist with zpool list:

Code: Select all
        # zpool list

no pools available

NB. OpenSolaris now uses ZFS, so you will likely have an existing ZFS pool called syspool on this OS.
Single Disk Pool

The simplest pool consists of a single device. Pools are created using zpool create. We can create a single disk pool as follows (you must use the absolute path to the disk file):

Code: Select all
        # zpool create herring /home/ocean/disk1
        # zpool list

NAME SIZE USED AVAIL CAP HEALTH ALTROOT
herring 123M 51.5K 123M 0% ONLINE -

No volume management, configuration, newfs or mounting is required. You now have a working pool complete with mounted ZFS filesystem under /herring (/Volumes/herring on Mac OS X - you can also see it mounted on your Mac desktop). We will learn about adjusting mount points in part 2 of the tutorial.

Create a file in the new filesystem:

Code: Select all
        # mkfile 32m /herring/foo
        # ls -lh /herring/foo

-rw------T 1 root root 32M Mar 7 19:56 /herring/foo

Code: Select all
        # zpool list

NAME SIZE USED AVAIL CAP HEALTH ALTROOT
herring 123M 32.1M 90.9M 26% ONLINE -

The new file is using about a quarter of the pool capacity (indicated by the CAP value). NB. If you run the list command before ZFS has finished writing to the disk you will see lower USED and CAP values than shown above; wait a few moments and try again.

Now destroy your pool with zpool destroy:

Code: Select all
        # zpool destroy herring
        # zpool list

no pools available

On Mac OS X you need to force an unmount of the filesystem (using umount -f /Volumes/herring) before destroying it as it will be in use by fseventsd.

You will only receive a warning about destroying your pool if it's in use. We'll see in a later tutorial how you can recover a pool you've accidentally destroyed.
Mirrored Pool

A pool composed of a single disk doesn't offer any redundancy. One method of providing redundancy is to use a mirrored pair of disks as a pool:

Code: Select all
        # zpool create trout mirror /home/ocean/disk1 /home/ocean/disk2

Code: Select all
        # zpool list

NAME SIZE USED AVAIL CAP HEALTH ALTROOT
trout 123M 51.5K 123M 0% ONLINE -

To see more detail about the pool use zpool status:

Code: Select all
        # zpool status trout

pool: trout
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
trout ONLINE 0 0 0
  mirror ONLINE 0 0 0
    /home/ocean/disk1 ONLINE 0 0 0
    /home/ocean/disk2 ONLINE 0 0 0

errors: No known data errors

We can see our pool contains one mirror of two disks. Let's create a file and see how USED changes:

Code: Select all
        # mkfile 32m /trout/foo

Code: Select all
        # zpool list

NAME SIZE USED AVAIL CAP HEALTH ALTROOT
trout 123M 32.1M 90.9M 26% ONLINE -

As before about a quarter of the disk has been used; but the data is now stored redundantly over two disks. Let's test it by overwriting the first disk label with random data (if you are using real disks you could physically disable or remove a disk instead):

Code: Select all
        # dd if=/dev/random of=/home/ocean/disk1 bs=512 count=1

ZFS automatically checks for errors when it reads/writes files, but we can force a check with the zpool scrub command.

Code: Select all
        # zpool scrub trout

Code: Select all
        # zpool status

pool: trout
state: DEGRADED
status: One or more devices could not be used because the label is missing or
invalid. Sufficient replicas exist for the pool to continue
functioning in a degraded state.
action: Replace the device using 'zpool replace'.
see: http://www.sun.com/msg/ZFS-8000-4J
scrub: scrub completed with 0 errors on Wed Mar 7 20:42:07 2007
config:
NAME STATE READ WRITE CKSUM
trout DEGRADED 0 0 0
  mirror DEGRADED 0 0 0
    /home/ocean/disk1 UNAVAIL 0 0 0 corrupted data
    /home/ocean/disk2 ONLINE 0 0 0

errors: No known data errors

The disk we used dd on is showing as UNAVAIL with corrupted data, but no data errors are reported for the pool as a whole, and we can still read and write to the pool:

Code: Select all
        # mkfile 32m /trout/bar
        # ls -l /trout/

total 131112
-rw------T 1 root root 33554432 Mar 7 20:43 bar
-rw------T 1 root root 33554432 Mar 7 20:35 foo

To maintain redundancy we should replace the broken disk with another. If you are using a physical disk you can use the zpool replace command (the zpool man page has details). However, in this file-based example I remove the disk file from the mirror and recreate it.

Devices are detached with zpool detach:

Code: Select all
        # zpool detach trout /home/ocean/disk1

Code: Select all
        # zpool status trout

pool: trout
state: ONLINE
scrub: scrub completed with 0 errors on Wed Mar 7 20:42:07 2007
config:
NAME STATE READ WRITE CKSUM
trout ONLINE 0 0 0
  /home/ocean/disk2 ONLINE 0 0 0

errors: No known data errors

Code: Select all
        # rm /home/ocean/disk1
        # mkfile 128m /home/ocean/disk1

To attach another device we specify an existing device in the mirror to attach it to with zpool attach:

Code: Select all
        # zpool attach trout /home/ocean/disk2 /home/ocean/disk1

If you're quick enough, after you attach the new disk you will see a resilver (remirroring) in progress with zpool status.

Code: Select all
        # zpool status trout

pool: trout
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress, 69.10% done, 0h0m to go
config:
NAME STATE READ WRITE CKSUM
trout ONLINE 0 0 0
  mirror ONLINE 0 0 0
    /home/ocean/disk2 ONLINE 0 0 0
    /home/ocean/disk1 ONLINE 0 0 0

errors: No known data errors

Once the resilver is complete, the pool is healthy again (you can also use ls to check the files are still there):

Code: Select all
        # zpool status trout

pool: trout
state: ONLINE
scrub: resilver completed with 0 errors on Wed Mar 7 20:58:17 2007
config:
NAME STATE READ WRITE CKSUM
trout ONLINE 0 0 0
  mirror ONLINE 0 0 0
    /home/ocean/disk2 ONLINE 0 0 0
    /home/ocean/disk1 ONLINE 0 0 0

errors: No known data errors

Adding to a Mirrored Pool

You can add disks to a pool without taking it offline. Let's double the size of our trout pool:

Code: Select all
        # zpool list

NAME SIZE USED AVAIL CAP HEALTH ALTROOT
trout 123M 64.5M 58.5M 52% ONLINE -

Code: Select all
        # zpool add trout mirror /home/ocean/disk3 /home/ocean/disk4

Code: Select all
        # zpool list

NAME SIZE USED AVAIL CAP HEALTH ALTROOT
trout 246M 64.5M 181M 26% ONLINE -

This happens almost instantly, and the filesystem within the pool remains available. Looking at the status now shows the pool consists of two mirrors:

Code: Select all
        # zpool status trout

pool: trout
state: ONLINE
scrub: resilver completed with 0 errors on Wed Mar 7 20:58:17 2007
config:
NAME STATE READ WRITE CKSUM
trout ONLINE 0 0 0
  mirror ONLINE 0 0 0
    /home/ocean/disk2 ONLINE 0 0 0
    /home/ocean/disk1 ONLINE 0 0 0
  mirror ONLINE 0 0 0
    /home/ocean/disk3 ONLINE 0 0 0
    /home/ocean/disk4 ONLINE 0 0 0

errors: No known data errors

We can see where the data is currently written in our pool using zpool iostat -v:

Code: Select all
        # zpool iostat -v trout

capacity operations bandwidth
pool used avail read write read write
---------------------------- ----- ----- ----- ----- ----- -----
trout 64.5M 181M 0 0 13.7K 278
mirror 64.5M 58.5M 0 0 19.4K 394
  /home/ocean/disk2 - - 0 0 20.6K 15.4K
  /home/ocean/disk1 - - 0 0 0 20.4K
mirror 0 123M 0 0 0 0
  /home/ocean/disk3 - - 0 0 0 768
  /home/ocean/disk4 - - 0 0 0 768
---------------------------- ----- ----- ----- ----- ----- -----

All the data is currently written on the first mirror pair, and none on the second. This makes sense as the second pair of disks was added after the data was written. If we write some new data to the pool the new mirror will be used:

Code: Select all
        # mkfile 64m /trout/quuxx

Code: Select all
        # zpool iostat -v trout

capacity operations bandwidth
pool used avail read write read write
---------------------------- ----- ----- ----- ----- ----- -----
trout 128M 118M 0 0 13.1K 13.6K
mirror 95.1M 27.9M 0 0 18.3K 9.29K
  /home/ocean/disk2 - - 0 0 19.8K 21.2K
  /home/ocean/disk1 - - 0 0 0 28.2K
mirror 33.2M 89.8M 0 0 0 10.4K
  /home/ocean/disk3 - - 0 0 0 11.1K
  /home/ocean/disk4 - - 0 0 0 11.1K
---------------------------- ----- ----- ----- ----- ----- -----

Note how a little more of the data has been written to the new mirror than the old: ZFS tries to make best use of all the resources in the pool.

That's it for part 1. In part 2 we will look at managing ZFS filesystems themselves and creating multiple filesystems within a pool. We'll create a new pool for part 2, so feel free to destroy the trout pool.

If you want to learn more about the theory behind ZFS and find reference material have a look at ZFS Administration Guide, OpenSolaris ZFS, ZFS BigAdmin and ZFS Best Practices.
