C H A P T E R  5

Using Virtual Disks With Logical Domains

This chapter describes how to use virtual disks with Logical Domains software.


Introduction to Virtual Disks

A virtual disk contains two components: the virtual disk itself, as it appears in a guest domain, and the virtual disk backend, which is where data is stored and where virtual I/O ends up. The virtual disk backend is exported from a service domain by the virtual disk server (vds) driver. The vds driver communicates with the virtual disk client (vdc) driver in the guest domain through the hypervisor using a logical domain channel (LDC). Finally, a virtual disk appears as /dev/[r]dsk/cXdYsZ devices in the guest domain.

FIGURE 5-1   Virtual Disks With Logical Domains




The virtual disk backend can be a physical disk, a physical disk slice, a file, a volume from a volume management framework, such as the Zettabyte File System (ZFS), Solaris™ Volume Manager (SVM), or Veritas Volume Manager (VxVM), or any disk pseudo device accessible from the service domain.


Managing Virtual Disks

This section describes adding a virtual disk to a guest domain, changing virtual disk and timeout options, and removing a virtual disk from a guest domain. See Virtual Disk Backend Options for a description of virtual disk options. See Virtual Disk Timeout for a description of the virtual disk timeout.

procedure icon  To Add a Virtual Disk

  1. Export the virtual disk backend from a service domain by using the following command.


    # ldm add-vdsdev [options={ro,slice,excl}] backend volume_name@service_name
    

  2. Assign the backend to a guest domain by using the following command.


    # ldm add-vdisk [timeout=seconds] disk_name volume_name@service_name ldom
    



    Note - A backend is actually exported from the service domain and assigned to the guest domain when the guest domain (ldom) is bound.



procedure icon  To Export a Virtual Disk Backend Multiple Times

A virtual disk backend can be exported multiple times either through the same or different virtual disk servers. Each exported instance of the virtual disk backend can then be assigned to either the same or different guest domains.

When a virtual disk backend is exported multiple times, it should not be exported with the exclusive (excl) option. Specifying the excl option allows the backend to be exported only once. The backend can be safely exported multiple times as a read-only device with the ro option.



caution icon

Caution - When a virtual disk backend is exported multiple times, applications running on guest domains and using that virtual disk are responsible for coordinating and synchronizing concurrent write access to ensure data coherency.



The following example describes how to add the same virtual disk to two different guest domains through the same virtual disk service.

  1. Export the virtual disk backend two times from a service domain by using the following commands.


    # ldm add-vdsdev [options={ro,slice}] backend volume1@service_name
    # ldm add-vdsdev [options={ro,slice}] backend volume2@service_name
    

    The add-vdsdev subcommand displays the following warning to indicate that the backend is being exported more than once.


    Warning: “backend” is already in use by one or more servers in guest “ldom”
    

  2. Assign the exported backend to each guest domain by using the following commands.

    The disk_name can be different for ldom1 and ldom2.


    # ldm add-vdisk [timeout=seconds] disk_name volume1@service_name ldom1
    # ldm add-vdisk [timeout=seconds] disk_name volume2@service_name ldom2
    

procedure icon  To Change Virtual Disk Options
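The steps of this procedure are not shown here, but the ldm set-vdsdev command (mentioned later in Exporting Volumes and Backward Compatibility) changes the options of an already exported backend. A sketch with a hypothetical volume name, assuming set-vdsdev accepts the same comma-separated options= list as add-vdsdev:

```shell
# Change the export options of an existing backend
# (volume and service names are hypothetical).
service# ldm set-vdsdev options=ro,slice vol1@primary-vds0
```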

procedure icon  To Change the Timeout Option
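The steps of this procedure are not shown here, but as a sketch, the timeout of an existing virtual disk can presumably be changed with the ldm set-vdisk subcommand, assuming it accepts the same timeout=seconds option as add-vdisk (disk and domain names are hypothetical):

```shell
# Set a 30-second connection timeout on an existing virtual disk.
# A timeout of 0 disables the timeout and restores blocking behavior.
service# ldm set-vdisk timeout=30 vdisk1 ldg1
```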

procedure icon  To Remove a Virtual Disk

  1. Remove a virtual disk from a guest domain by using the following command.


    # ldm rm-vdisk disk_name ldom
    

  2. Stop exporting the corresponding backend from the service domain by using the following command.


    # ldm rm-vdsdev volume_name@service_name
    


Virtual Disk Appearance

When a backend is exported as a virtual disk, it can appear in the guest domain either as a full disk or as a single slice disk. The way it appears depends on the type of the backend and on the options used to export it.

Full Disk

When a backend is exported to a domain as a full disk, it appears in that domain as a regular disk with 8 slices (s0 to s7). Such a disk is visible with the format(1M) command. The disk’s partition table can be changed using either the fmthard(1M) or format(1M) command.

A full disk is also visible to the OS installation software and can be selected as a disk onto which the OS can be installed.

Any backend can be exported as a full disk except physical disk slices, which can only be exported as single slice disks.

Single Slice Disk

When a backend is exported to a domain as a single slice disk, it appears in that domain as a disk with a single partition (s0). Such a disk is not visible with the format(1M) command, and its partition table cannot be changed.

A single slice disk is not visible from the OS installation software and cannot be selected as a disk device onto which the OS can be installed.

Any backend can be exported as a single slice disk except physical disks, which can only be exported as full disks.


Virtual Disk Backend Options

Different options can be specified when exporting a virtual disk backend. These options are indicated in the options= argument of the ldm add-vdsdev command as a comma separated list. The valid options are: ro, slice, and excl.

Read-only (ro) Option

The read-only (ro) option specifies that the backend is to be exported as a read-only device. In that case, the virtual disk assigned to the guest domain can only be accessed for read operations, and any write operation to the virtual disk will fail.
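For example, exporting a backend as a read-only device might look like the following (the backend path and volume name are hypothetical):

```shell
# Export the backend read-only; any write from the guest domain will fail.
service# ldm add-vdsdev options=ro /dev/dsk/c1t48d0s2 rovol@primary-vds0
```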

Exclusive (excl) Option

The exclusive (excl) option specifies that the backend in the service domain has to be opened exclusively by the virtual disk server when it is exported as a virtual disk to another domain. When a backend is opened exclusively, it is not accessible by other applications in the service domain. This prevents the applications running in the service domain from inadvertently using a backend that is also being used by a guest domain.



Note - Some drivers do not honor the excl option and will disallow some virtual disk backends from being opened exclusively. The excl option is known to work with physical disks and slices, but the option does not work with files. It may or may not work with pseudo devices, such as disk volumes. If the driver of the backend does not honor the exclusive open, the backend excl option is ignored, and the backend is not opened exclusively.



Because the excl option prevents applications running in the service domain from accessing a backend exported to a guest domain, avoid setting the excl option when applications in the service domain still need access to the exported backend.

By default, the backend is opened non-exclusively. That way, the backend can still be used by applications running in the service domain while it is exported to another domain. Note that this is a new behavior starting with the Solaris 10 5/08 OS release. Before the Solaris 10 5/08 OS release, disk backends were always opened exclusively, and it was not possible to have a backend opened non-exclusively.

Slice (slice) Option

A backend is normally exported either as a full disk or as a single slice disk depending on its type. If the slice option is specified, then the backend is forcibly exported as a single slice disk.

This option is useful when you want to export the raw content of a backend. For example, if you have a ZFS or SVM volume where you have already stored data and you want your guest domain to access this data, then you should export the ZFS or SVM volume using the slice option.

For more information about this option, see "Virtual Disk Backend".


Virtual Disk Backend

The virtual disk backend is the location where data of a virtual disk are stored. The backend can be a disk, a disk slice, a file, or a volume, such as ZFS, SVM, or VxVM. A backend appears in a guest domain either as a full disk or as single slice disk, depending on whether the slice option is set when the backend is exported from the service domain. By default, a virtual disk backend is exported non-exclusively as a readable-writable full disk.

Physical Disk or Disk LUN

A physical disk or logical unit number (LUN) is always exported as a full disk. In that case, virtual disk drivers (vds and vdc) forward I/O from the virtual disk and act as a pass-through to the physical disk or LUN.

A physical disk or LUN is exported from a service domain by exporting the device corresponding to slice 2 (s2) of that disk without setting the slice option. If you export slice 2 of a disk with the slice option, only that slice is exported and not the entire disk.

procedure icon  To Export a Physical Disk as a Virtual Disk

  1. For example, to export the physical disk c1t48d0 as a virtual disk, you must export slice 2 of that disk (c1t48d0s2) from the service domain as follows.


    service# ldm add-vdsdev /dev/dsk/c1t48d0s2 c1t48d0@primary-vds0
    

  2. From the service domain, assign the disk (pdisk) to guest domain ldg1, for example.


    service# ldm add-vdisk pdisk c1t48d0@primary-vds0 ldg1
    

  3. After the guest domain is started and running the Solaris OS, you can list the disk (c0d1, for example) and see that the disk is accessible and is a full disk; that is, a regular disk with 8 slices.


    ldg1# ls -1 /dev/dsk/c0d1s*
    /dev/dsk/c0d1s0
    /dev/dsk/c0d1s1
    /dev/dsk/c0d1s2
    /dev/dsk/c0d1s3
    /dev/dsk/c0d1s4
    /dev/dsk/c0d1s5
    /dev/dsk/c0d1s6
    /dev/dsk/c0d1s7
    

Physical Disk Slice

A physical disk slice is always exported as a single slice disk. In that case, virtual disk drivers (vds and vdc) forward I/O from the virtual disk and act as a pass-through to the physical disk slice.

A physical disk slice is exported from a service domain by exporting the corresponding slice device. If the device is not slice 2, it is automatically exported as a single slice disk whether or not you specify the slice option. If the device is slice 2 of the disk, you must set the slice option to export only slice 2 as a single slice disk; otherwise, the entire disk is exported as a full disk.

procedure icon  To Export a Physical Disk Slice as a Virtual Disk

  1. For example, to export slice 0 of the physical disk c1t57d0 as a virtual disk, you must export the device corresponding to that slice (c1t57d0s0) from the service domain as follows.


    service# ldm add-vdsdev /dev/dsk/c1t57d0s0 c1t57d0s0@primary-vds0
    

    You do not need to specify the slice option, because a slice is always exported as a single slice disk.

  2. From the service domain, assign the disk (pslice) to guest domain ldg1, for example.


    service# ldm add-vdisk pslice c1t57d0s0@primary-vds0 ldg1
    

  3. After the guest domain is started and running the Solaris OS, you can list the disk (c0d13, for example) and see that the disk is accessible and is a single slice disk (s0).


    ldg1# ls -1 /dev/dsk/c0d13s*
    /dev/dsk/c0d13s0
    

procedure icon  To Export Slice 2
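The steps of this procedure are not shown here, but per the rule above, exporting only slice 2 requires setting the slice option. A sketch with a hypothetical device and volume name:

```shell
# Without options=slice, exporting s2 would export the entire disk.
service# ldm add-vdsdev options=slice /dev/dsk/c1t57d0s2 c1t57d0s2@primary-vds0
```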

File and Volume

A file or volume (for example, from ZFS or SVM) is exported either as a full disk or as a single slice disk, depending on whether or not the slice option is set.

File or Volume Exported as a Full Disk

If you do not set the slice option, a file or volume is exported as a full disk. In that case, virtual disk drivers (vds and vdc) forward I/O from the virtual disk and manage the partitioning of the virtual disk. The file or volume eventually becomes a disk image containing data from all slices of the virtual disk and the metadata used to manage the partitioning and disk structure.

When a blank file or volume is exported as full disk, it appears in the guest domain as an unformatted disk; that is, a disk with no partition. Then you need to run the format(1M) command in the guest domain to define usable partitions and to write a valid disk label. Any I/O to the virtual disk fails while the disk is unformatted.



Note - Before the Solaris 10 5/08 OS release, when a blank file was exported as a virtual disk, the system wrote a default disk label and created default partitioning. This is no longer the case with the Solaris 10 5/08 OS release, and you must run format(1M) in the guest domain to create partitions.



procedure icon  To Export a File as a Full Disk

  1. From the service domain, create a file (fdisk0 for example) to use as the virtual disk.


    service# mkfile 100m /ldoms/domain/test/fdisk0
    

    The size of the file defines the size of the virtual disk. This example creates a 100-megabyte blank file to get a 100-megabyte virtual disk.

  2. From the service domain, export the file as a virtual disk.


    service# ldm add-vdsdev /ldoms/domain/test/fdisk0 fdisk0@primary-vds0
    

    In this example, the slice option is not set, so the file is exported as a full disk.

  3. From the service domain, assign the disk (fdisk) to guest domain ldg1, for example.


    service# ldm add-vdisk fdisk fdisk0@primary-vds0 ldg1
    

  4. After the guest domain is started and running the Solaris OS, you can list the disk (c0d5, for example) and see that the disk is accessible and is a full disk; that is, a regular disk with 8 slices.


    ldg1# ls -1 /dev/dsk/c0d5s*
    /dev/dsk/c0d5s0
    /dev/dsk/c0d5s1
    /dev/dsk/c0d5s2
    /dev/dsk/c0d5s3
    /dev/dsk/c0d5s4
    /dev/dsk/c0d5s5
    /dev/dsk/c0d5s6
    /dev/dsk/c0d5s7
    

File or Volume Exported as a Single Slice Disk

If the slice option is set, then the file or volume is exported as a single slice disk. In that case, the virtual disk has only one partition (s0), which is directly mapped to the file or volume backend. The file or volume only contains data written to the virtual disk with no extra data like partitioning information or disk structure.

When a file or volume is exported as a single slice disk, the system simulates disk partitioning, which makes that file or volume appear as a single disk slice. Because the disk partitioning is simulated, you do not need to create partitions for that disk.

procedure icon  To Export a ZFS Volume as a Single Slice Disk

  1. From the service domain, create a ZFS volume (zdisk0 for example) to use as a single slice disk.


    service# zfs create -V 100m ldoms/domain/test/zdisk0
    

    The size of the volume defines the size of the virtual disk. This example creates a 100-megabyte volume to get a 100-megabyte virtual disk.

  2. From the service domain, export the device corresponding to that ZFS volume, and set the slice option so that the volume is exported as a single slice disk.


    service# ldm add-vdsdev options=slice /dev/zvol/dsk/ldoms/domain/test/zdisk0 zdisk0@primary-vds0
    

  3. From the service domain, assign the volume (zdisk0) to guest domain ldg1, for example.


    service# ldm add-vdisk zdisk0 zdisk0@primary-vds0 ldg1
    

  4. After the guest domain is started and running the Solaris OS, you can list the disk (c0d9, for example) and see that the disk is accessible and is a single slice disk (s0).


    ldg1# ls -1 /dev/dsk/c0d9s*
    /dev/dsk/c0d9s0
    

Exporting Volumes and Backward Compatibility

Before the Solaris 10 5/08 OS release, the slice option did not exist, and volumes were exported as single slice disks. If you have a configuration exporting volumes as virtual disks and if you upgrade the system to the Solaris 10 5/08 OS, volumes are now exported as full disks instead of single slice disks. To preserve the old behavior and to have your volumes exported as single slice disks, you need to do either of the following:

  • Use the ldm set-vdsdev command in LDoms 1.0.3 software, and set the slice option for all volumes you want to export as single slice disks. Refer to the ldm man page or the Logical Domains (LDoms) Manager 1.0.3 Man Page Guide for more information about this command.

  • Add the following line to the /etc/system file on the service domain.


    set vds:vd_volume_force_slice = 1
    



    Note - Setting this tunable forces the export of all volumes as single slice disks, and you cannot export any volume as a full disk.



Summary of How Different Types of Backends Are Exported


Backend                               No Slice Option        Slice Option Set
Disk (disk slice 2)                   Full disk[1]           Single slice disk[2]
Disk slice (not slice 2)              Single slice disk[3]   Single slice disk
File                                  Full disk              Single slice disk
Volume, including ZFS, SVM, or VxVM   Full disk              Single slice disk

Guidelines

Using the Loopback File (lofi) Driver

It is possible to use the loopback file (lofi) driver to export a file as a virtual disk. However, doing this adds an extra driver layer and impacts performance of the virtual disk. Instead, you can directly export a file as a full disk or as a single slice disk. See File and Volume.

Directly or Indirectly Exporting a Disk Slice

Before you export a slice as a virtual disk either directly or indirectly (for example, through an SVM volume), use the prtvtoc(1M) command to ensure that the slice does not start on the first block (block 0) of the physical disk.

If you directly or indirectly export a disk slice which starts on the first block of a physical disk, you might overwrite the partition table of the physical disk and make all partitions of that disk inaccessible.
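A quick way to perform this check is to print the disk's VTOC and inspect the first sector of the slice you intend to export (the device name below is hypothetical):

```shell
# Print the partition map; the slice to be exported must not start
# at sector 0, or exporting it can overwrite the partition table.
service# prtvtoc /dev/rdsk/c1t57d0s2
```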


CD, DVD and ISO Images

You can export a compact disc (CD) or digital versatile disc (DVD) the same way you export any regular disk. To export a CD or DVD to a guest domain, export slice 2 of the CD or DVD device as a full disk; that is, without the slice option.



Note - You cannot export the CD or DVD drive itself; you only can export the CD or DVD that is inside the CD or DVD drive. Therefore, a CD or DVD must be present inside the drive before you can export it. Also, to be able to export a CD or DVD, that CD or DVD cannot be in use in the service domain. In particular, the Volume Management file system, volfs(7FS) service must not use the CD or DVD. See To Export a CD or DVD From the Service Domain to the Guest Domain for instructions on how to remove the device from use by volfs.



If you have an International Organization for Standardization (ISO) image of a CD or DVD stored in a file or on a volume, and you export that file or volume as a full disk, it appears as a CD or DVD in the guest domain.

When you export a CD, DVD, or an ISO image, it automatically appears as a read-only device in the guest domain. However, you cannot perform any CD control operations from the guest domain; that is, you cannot start, stop, or eject the CD from the guest domain. If the exported CD, DVD, or ISO image is bootable, the guest domain can be booted on the corresponding virtual disk.

For example, if you export a Solaris OS installation DVD, you can boot the guest domain on the virtual disk corresponding to that DVD and install the guest domain from that DVD. To do so, when the guest domain reaches the ok prompt, use the following command.


ok boot /virtual-devices@100/channel-devices@200/disk@n:f

Where n is the index of the virtual disk representing the exported DVD.



Note - If you export a Solaris OS installation DVD and boot a guest domain on the virtual disk corresponding to that DVD to install the guest domain, then you cannot change the DVD during the installation. So you might need to skip any installation step that requests a different CD/DVD, or provide an alternate path to access the requested media.



procedure icon  To Export a CD or DVD From the Service Domain to the Guest Domain

  1. Insert the CD or DVD in the CD or DVD drive.

  2. From the service domain, check whether the volume management daemon, vold(1M), is running and online.


    service# svcs volfs
    STATE          STIME    FMRI
    online         12:28:12 svc:/system/filesystem/volfs:default
    

  3. Do one of the following.

    • If the volume management daemon is not running or online, go to Step 5.

    • If the volume management daemon is running and online, as in the example in Step 2, do the following:

    1. Edit the /etc/vold.conf file and comment out the line starting with the following words.


      use cdrom drive....
      

      Refer to the vold.conf(1M) man page for more information.

    2. From the service domain, restart the volume management file system service.


      service# svcadm refresh volfs
      service# svcadm restart volfs
      

  4. From the service domain, find the disk path for the CD-ROM device.


    service# cdrw -l
    Looking for CD devices...
       Node                   Connected Device                 Device type
    ----------------------+--------------------------------+-----------------
    /dev/rdsk/c1t0d0s2    | MATSHITA CD-RW  CW-8124   DZ13 | CD Reader/Writer
    

  5. From the service domain, export the CD or DVD disk device as a full disk.


    service# ldm add-vdsdev /dev/dsk/c1t0d0s2 cdrom@primary-vds0
    

  6. From the service domain, assign the exported CD or DVD to the guest domain (ldg1 in this example).


    service# ldm add-vdisk cdrom cdrom@primary-vds0 ldg1
    

Exporting a CD or DVD Multiple Times

A CD or DVD can be exported multiple times and assigned to different guest domains. See To Export a Virtual Disk Backend Multiple Times for more information.


Virtual Disk Timeout

By default, if the service domain providing access to a virtual disk backend is down, all I/O from the guest domain to the corresponding virtual disk is blocked. I/O automatically resumes when the service domain is operational and is servicing I/O requests to the virtual disk backend.

However, in some cases, file systems or applications might not want an I/O operation to block, but rather to fail and report an error if the service domain is down for too long. It is now possible to set a connection timeout period for each virtual disk. If the connection between the virtual disk client on the guest domain and the virtual disk server on the service domain is not reestablished within the timeout period, any pending I/O and any new I/O fail for as long as the service domain is down.

This timeout can be set in either of two ways. First, specify the timeout in seconds with the timeout=seconds option of the ldm add-vdisk command. If the timeout is set to 0, the timeout is disabled and I/O is blocked while the service domain is down (this is the default setting and behavior).

Alternatively, the timeout can be set by adding the following line to the /etc/system file on the guest domain.


set vdc:vdc_timeout = seconds



Note - If this tunable is set, it overrides any timeout setting done using the ldm CLI. Also, the tunable sets the timeout for all virtual disks in the guest domain.
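For example, setting a 60-second timeout when the virtual disk is added might look like the following (the disk, volume, and domain names are hypothetical):

```shell
# Pending and new I/O fail if the service domain stays down
# for more than 60 seconds.
service# ldm add-vdisk timeout=60 vdisk1 vol1@primary-vds0 ldg1
```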




Virtual Disk and SCSI

If a physical SCSI disk or LUN is exported as a full disk, the corresponding virtual disk supports the user SCSI command interface, uscsi(7D), and the multihost disk control operations interface, mhd(7I). Other virtual disks, such as virtual disks having a file or a volume as a backend, do not support these interfaces.

As a consequence, applications or product features using SCSI commands (such as SVM metaset, or Solaris Cluster shared devices) can be used in guest domains only with virtual disks having a physical SCSI disk as a backend.



Note - SCSI operations are effectively executed by the service domain, which manages the physical SCSI disk or LUN used as a virtual disk backend. In particular, SCSI reservations are done by the service domain. Therefore, applications running in the service domain and in guest domains should not issue SCSI commands to the same physical SCSI disks; otherwise, this can lead to an unexpected disk state.




Virtual Disk and the format(1M) Command

The format(1M) command works in a guest domain with virtual disks exported as full disks. Single slice disks are not seen by the format(1M) command, and it is not possible to change the partitioning of such disks.

Virtual disks whose backends are SCSI disks support all format(1M) subcommands. Virtual disks whose backends are not SCSI disks do not support some format(1M) subcommands, such as repair and defect. In that case, the behavior of format(1M) is similar to the behavior of Integrated Drive Electronics (IDE) disks.



Note - The format(1M) command crashes when you select a virtual disk that has an extensible firmware interface (EFI) disk label. Refer to Bug ID 6363316 in the Logical Domains (LDoms) 1.0.3 Release Notes.




Using ZFS With Virtual Disks

This section describes how to use the Zettabyte File System (ZFS) with virtual disks on logical domains.

Creating a Virtual Disk on Top of a ZFS Volume

The following procedure describes how to create a ZFS volume in a service domain and make that volume available to other domains as a virtual disk. In this example, the service domain is the same as the control domain and is named primary. The guest domain is named ldg1 as an example. The prompts in each step show in which domain to run the command.

procedure icon  To Create a Virtual Disk on Top of a ZFS Volume

  1. Create a ZFS storage pool (zpool).


    primary# zpool create -f tank1 c2t42d1
    

  2. Create a ZFS volume.


    primary# zfs create -V 100m tank1/myvol
    

  3. Verify that the zpool (tank1 in this example) and ZFS volume (tank1/myvol in this example) have been created.


    primary# zfs list
            NAME                   USED  AVAIL  REFER  MOUNTPOINT
            tank1                  100M  43.0G  24.5K  /tank1
            tank1/myvol           22.5K  43.1G  22.5K  -
    

  4. Configure a service exporting tank1/myvol as a virtual disk.


    primary# ldm add-vdsdev options=slice /dev/zvol/dsk/tank1/myvol zvol@primary-vds0
    

  5. Add the exported disk to another domain (ldg1 in this example).


    primary# ldm add-vdisk vzdisk zvol@primary-vds0 ldg1
    

  6. On the other domain (ldg1 in this example), start the domain and ensure that the new virtual disk is visible (you might have to run the devfsadm command).

    In this example, the new disk appears as /dev/rdsk/c2d2s0.


    ldg1# newfs /dev/rdsk/c2d2s0
    newfs: construct a new file system /dev/rdsk/c2d2s0: (y/n)? y
    Warning: 4096 sector(s) in last cylinder unallocated
    Warning: 4096 sector(s) in last cylinder unallocated
    /dev/rdsk/c2d2s0: 204800 sectors in 34 cylinders of 48 tracks, 128 sectors
    100.0MB in 3 cyl groups (14 c/g, 42.00MB/g, 20160 i/g) super-block backups
    (for fsck -F ufs -o b=#) at: 32, 86176, 172320,
     
    ldg1# mount /dev/dsk/c2d2s0 /mnt
     
    ldg1# df -h /mnt
    Filesystem             size   used   avail capacity  Mounted on
    /dev/dsk/c2d2s0         93M   1.0M     82M       2%  /mnt
    



Note - In this example, the ZFS volume is exported as a single slice disk. You can also export a ZFS volume as a full disk; do so if you want to partition the virtual disk or install the Solaris OS on the virtual disk.



Using ZFS Over a Virtual Disk

The following procedure shows how to use ZFS directly from a domain on top of a virtual disk. You can create ZFS pools, file systems, and volumes over the top of virtual disks with the Solaris 10 OS zpool(1M) and zfs(1M) commands. Although the storage backend is different (virtual disks instead of physical disks), there is no change to the usage of ZFS.

Additionally, if you have an already existing ZFS file system, then you can export it from a service domain to use it in another domain.

In this example, the service domain is the same as the control domain and is named primary. The guest domain is named ldg1 as an example. The prompts in each step show in which domain to run the command.

procedure icon  To Use ZFS Over a Virtual Disk

  1. Create a ZFS pool (tank in this example), and then verify that it has been created.


    primary# zpool create -f tank c2t42d0
    primary# zpool list
    NAME                SIZE   USED  AVAIL   CAP   HEALTH  ALTROOT
    tank               43.8G   108K  43.7G    0%   ONLINE  -      
    

  2. Create a ZFS file system (tank/test in this example), and then verify that it has been created.

    In this example, the file system is created on top of disk c2t42d0 by running the following command on the service domain.


    primary# zfs create tank/test
    primary# zfs list
    NAME                   USED  AVAIL  REFER  MOUNTPOINT
    tank                   106K  43.1G  25.5K  /tank
    tank/test             24.5K  43.1G  24.5K  /tank/test
    

  3. Export the ZFS pool (tank in this example).


    primary# zpool export tank
    

  4. Configure a service exporting the physical disk c2t42d0s2 as a virtual disk.


    primary# ldm add-vdsdev /dev/rdsk/c2t42d0s2 volz@primary-vds0
    

  5. Add the exported disk to another domain (ldg1 in this example).


    primary# ldm add-vdisk vdiskz volz@primary-vds0 ldg1
    

  6. On the other domain (ldg1 in this example), start the domain and make sure the new virtual disk is visible (you might have to run the devfsadm command), and then import the ZFS pool.


    ldg1# zpool import tank
    ldg1# zpool list
    NAME            SIZE    USED    AVAIL   CAP   HEALTH   ALTROOT
    tank           43.8G    214K    43.7G    0%   ONLINE   -      
     
    ldg1# zfs list
    NAME                   USED  AVAIL  REFER  MOUNTPOINT
    tank                   106K  43.1G  25.5K  /tank
    tank/test             24.5K  43.1G  24.5K  /tank/test
     
    ldg1# df -hl -F zfs
    Filesystem             size   used  avail capacity  Mounted on
    tank                    43G    25K    43G     1%    /tank
    tank/test               43G    24K    43G     1%    /tank/test
    

    The ZFS pool (tank in this example) and its file system (tank/test) are now imported and usable from domain ldg1.

Using ZFS for Boot Disks

You can use large files on a ZFS file system as the virtual disks in logical domains.



Note - A ZFS file system requires more memory in the service domain. Take this into account when configuring the service domain.



ZFS enables you to snapshot a disk image and clone it for other domains, as the following procedure shows.

procedure icon  To Use ZFS for Boot Disks

You can use the following procedure to create ZFS disks for logical domains, and also snapshot and clone them for other domains.

  1. On the primary domain, reserve an entire disk or slice for use as the storage for the ZFS pool. Step 2 uses slice 5 of a disk.

  2. Create a ZFS pool; for example, ldomspool.


    # zpool create ldomspool /dev/dsk/c0t0d0s5
    

  3. Create a ZFS file system for the first domain (ldg1 in this example).


    # zfs create ldomspool/ldg1
    

  4. Create a file to be the disk for this domain.


    # mkfile 1G /ldomspool/ldg1/bootdisk
    

  5. Specify the file as the device to use when creating the domain.


    primary# ldm add-vdsdev /ldomspool/ldg1/bootdisk vol1@primary-vds0
    primary# ldm add-vdisk vdisk1 vol1@primary-vds0 ldg1
    

  6. Boot domain ldg1 and net install to vdisk1. This file functions as a full disk and can have partitions; that is, separate partitions for root, usr, home, dump, and swap.

  7. Once the installation is complete, snapshot the file system.


    # zfs snapshot ldomspool/ldg1@initial
    



Note - Taking the snapshot before the domain reboots means that the domain state is not saved as part of the snapshot or in any clones created from the snapshot.



  8. Create additional clones from the snapshot and use it as the boot disk for other domains (ldg2 and ldg3 in this example).


    # zfs clone ldomspool/ldg1@initial ldomspool/ldg2
    # zfs clone ldomspool/ldg1@initial ldomspool/ldg3
    

  9. Verify that everything was created successfully.


    # zfs list
       NAME                       USED  AVAIL  REFER  MOUNTPOINT     
       ldomspool                 1.07G  2.84G  28.5K  /ldomspool     
       ldomspool/ldg1            1.03G  2.84G  1.00G  /ldomspool/ldg1
       ldomspool/ldg1@initial    23.0M      -  1.00G  -              
       ldomspool/ldg2            23.2M  2.84G  1.00G  /ldomspool/ldg2
       ldomspool/ldg3            21.0M  2.84G  1.00G  /ldomspool/ldg3
    



    Note - Ensure that the ZFS pool has enough space for the clones being created. ZFS uses copy-on-write, so a clone consumes space from the pool only when its blocks are modified. Even after the domains are booted, the clones use only a small percentage of the space needed for a full disk, because most of the OS binaries are identical to those in the initial snapshot.




Using Volume Managers in a Logical Domains Environment

This section describes using virtual disks on top of volume managers and using volume managers on top of virtual disks.

Using Virtual Disks on Top of Volume Managers

Any Zettabyte File System (ZFS), Solaris Volume Manager (SVM), or Veritas Volume Manager (VxVM) volume can be exported from a service domain to a guest domain as a virtual disk. A volume can be exported either as a single slice disk (if the slice option is specified with the ldm add-vdsdev command) or as a full disk.



Note - The remainder of this section uses an SVM volume as an example. However, the discussion also applies to ZFS and VxVM volumes.



When a volume is exported as a single slice disk, the virtual disk has only a single slice (s0). For example, if a service domain exports the SVM volume /dev/md/dsk/d0 to domain1 as a single slice disk, and domain1 sees that virtual disk as /dev/dsk/c0d2*, then domain1 has only an s0 device; that is, /dev/dsk/c0d2s0.

The virtual disk in the guest domain (for example, /dev/dsk/c0d2s0) is directly mapped to the associated volume (for example, /dev/md/dsk/d0), and data stored on the virtual disk from the guest domain is stored directly on the associated volume with no extra metadata. Data stored on the virtual disk from the guest domain can therefore also be accessed directly from the service domain through the associated volume.
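As a hedged sketch of this kind of export, the following commands show an SVM volume exported as a single slice disk. The volume, service, and domain names (vol2, primary-vds0, domain1) are illustrative, not prescribed.

```shell
# Export the SVM volume as a single slice disk (options=slice).
primary# ldm add-vdsdev options=slice /dev/md/dsk/d0 vol2@primary-vds0
# Assign the exported volume to the guest domain.
primary# ldm add-vdisk vdisk2 vol2@primary-vds0 domain1
```

After domain1 is bound and booted, the guest sees the volume as a disk with only an s0 device, as described above.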


Using Virtual Disks on Top of SVM

When a RAID or mirrored SVM volume is used as a virtual disk by another domain, it must be exported without the exclusive (excl) option set. Otherwise, if one of the components of the SVM volume fails, recovery of the volume using the metareplace command or a hot spare cannot start: the metastat command reports the volume as resynchronizing, but the resynchronization does not progress.

For example, suppose /dev/md/dsk/d0 is a RAID SVM volume exported as a virtual disk with the excl option to another domain, and d0 is configured with hot-spare devices. If a component of d0 fails, SVM replaces the failing component with a hot spare and attempts to resynchronize the volume, but the resynchronization never makes progress:


# metastat d0
d0: RAID
    State: Resyncing
    Hot spare pool: hsp000
    Interlace: 32 blocks
    Size: 20097600 blocks (9.6 GB)
Original device:
    Size: 20100992 blocks (9.6 GB)
Device                                     Start Block  Dbase   State Reloc
c2t2d0s1                                           330  No       Okay  Yes
c4t12d0s1                                          330  No       Okay  Yes
/dev/dsk/c10t600C0FF0000000000015153295A4B100d0s1  330  No  Resyncing  Yes

In such a situation, the domain using the SVM volume as a virtual disk must be stopped and unbound before the resynchronization can complete. The SVM volume can then be resynchronized using the metasync command.


# metasync d0

Using Virtual Disks When VxVM Is Installed

When Veritas Volume Manager (VxVM) is installed on your system and Veritas Dynamic Multipathing (DMP) is enabled on a physical disk or partition that you want to export as a virtual disk, you must export that disk or partition without setting the excl option. Otherwise, you receive an error in /var/adm/messages while binding a domain that uses such a disk:


vd_setup_vd():  ldi_open_by_name(/dev/dsk/c4t12d0s2) = errno 16
vds_add_vd():  Failed to add vdisk ID 0

You can check whether Veritas DMP is enabled by looking at the multipathing information in the output of the vxdisk list command; for example:


# vxdisk list Disk_3
Device:    Disk_3
devicetag: Disk_3
type:      auto
info:      format=none
flags:     online ready private autoconfig invalid
pubpaths:  block=/dev/vx/dmp/Disk_3s2 char=/dev/vx/rdmp/Disk_3s2
guid:      -
udid:      SEAGATE%5FST336753LSUN36G%5FDISKS%5F3032333948303144304E0000
site:      -
Multipathing information:
numpaths:  1
c4t12d0s2  state=enabled

Alternatively, if you want to export a disk or slice as a virtual disk with the excl option set while Veritas DMP is enabled on it, you can disable DMP for that path using the vxdmpadm command. For example:


# vxdmpadm -f disable path=/dev/dsk/c4t12d0s2

Using Volume Managers on Top of Virtual Disks

This section describes the following situations in the Logical Domains environment:

Using ZFS on Top of Virtual Disks

Any virtual disk can be used with ZFS. A ZFS storage pool (zpool) can be imported in any domain that sees all the storage devices that are part of this zpool, regardless of whether the domain sees all these devices as virtual devices or real devices.
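For example, a zpool can be created directly on a virtual disk from inside the guest domain. The device name c0d1 and the pool name guestpool below are illustrative assumptions:

```shell
# Inside the guest domain; c0d1 is a hypothetical virtual disk device.
# Create a ZFS pool on the whole virtual disk.
# zpool create guestpool c0d1
# Verify the pool is online.
# zpool status guestpool
```

Because the zpool is stored on the virtual disk backend, it can also be exported from the guest domain and imported in the service domain (or another domain that sees the same backend), as long as it is imported in only one domain at a time.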

Using SVM on Top of Virtual Disks

Any virtual disk can be used in the SVM local disk set. For example, a virtual disk can be used for storing the SVM metadevice state database, metadb(1M), of the local disk set, or for creating SVM volumes in the local disk set.
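A hedged sketch of both uses from inside the guest domain; the slice names c0d1s7 and c0d1s0 and the volume name d10 are illustrative assumptions:

```shell
# Inside the guest domain; c0d1 is a hypothetical virtual disk.
# Create three state database replicas on one slice of the virtual disk.
# metadb -a -f -c 3 c0d1s7
# Create a simple one-stripe SVM volume in the local set on another slice.
# metainit d10 1 1 c0d1s0
```

These commands behave the same as on a physical disk, because the local disk set places no SCSI-specific requirements on its devices.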

Any virtual disk whose backend is a SCSI disk can be used in an SVM shared disk set, metaset(1M). Virtual disks whose backends are not SCSI disks cannot be added to an SVM shared disk set; trying to do so fails with an error similar to the following.


# metaset -s test -a c2d2
metaset: domain1: test: failed to reserve any drives

Using VxVM on Top of Virtual Disks

For VxVM support in guest domains, refer to the VxVM documentation from Symantec.

