

Feb 22, 2010
Solaris LDOM Virtualization
I created an LDom on a Sun Blade T6320 server using the procedure below. Use this as a beginner's guide for your own LDom creation, and refer to the Sun online LDoms documentation for more information.
-------------------------------------------------------------------------------------

Sun Logical Domains (LDoms) are full virtual machines, each running an independent operating system instance with virtualized CPU, memory, storage, console, and cryptographic devices. This technology lets you partition system resources into logical groupings and create multiple discrete systems, each with its own operating system, resources, and identity, within a single physical server. You can run a variety of application software in different logical domains and keep them independent for performance and security purposes. The LDoms environment can help achieve greater resource usage, better scaling, and increased security and isolation.

Logical and control domains: The control domain communicates with the hypervisor to create and manage all logical domain configurations within a server platform. The Logical Domains Manager, which runs in the control domain, is used to create and manage logical domains and maps them to physical resources. Without access to the Logical Domains Manager, all logical domain resource levels remain static. The initial domain created when installing the Logical Domains software is the control domain, named primary.

You can download the Logical Domains Manager from http://www.sun.com/servers/coolthreads/ldoms/index.jsp.
Read the release notes for system firmware and patch requirements. By default, the LDoms software is installed under /opt/SUNWldm/. Verify that the command below works; a successful listing confirms that the Logical Domains Manager is running.

primary-control01# /opt/SUNWldm/bin/ldm list

NAME     STATE   FLAGS  CONS  VCPU  MEMORY  UTIL  UPTIME
primary  active  -t-cv  SP    32    16128M  49%   90m

Creating default services: You need to create the default virtual services that the control domain uses to provide disk services, console access, and networking. The commands below create each of these services.

Create a virtual disk server (vds): The virtual disk server allows virtual disks on the control domain to be exported to logical domains.
primary-control01# ldm add-vds primary-vds0 primary

Create a virtual console concentrator (vcc): The virtual console concentrator provides terminal services for logical domain consoles.
primary-control01# ldm add-vcc port-range=5000-5100 primary-vcc0 primary

Create a virtual switch (vsw): The virtual switch enables networking between virtual network devices in logical domains.
primary-control01# ldm add-vsw net-dev=e1000g0 primary-vsw0 primary


List the default services created:
primary-control01# ldm list-services primary
VDSNAME       VOLUME   OPTIONS   DEVICE
primary-vds0
VCCNAME       PORT-RANGE
primary-vcc0  5000-5100
VSWNAME       MAC                NET-DEV   DEVICE     MODE
primary-vsw0  00:11:5a:12:dc:fc  e1000g0   switch@0   prog,promisc


Control domain creation: The next step is the initial setup of the primary domain, which acts as the control domain. Specify the resources that the primary domain will use; everything else is released for use by guest domains. In this document, we configure the control domain with 4 VCPUs and 4 GB of memory.
primary-control01# ldm set-mau 1 primary

primary-control01# ldm set-vcpu 4 primary
primary-control01# ldm set-memory 4g primary

Now make the modified configuration persistent on the service processor using the add-spconfig subcommand:


primary-control01# ldm list-spconfig

factory-default [current]
primary-control01# ldm add-spconfig initial
primary-control01# ldm list-spconfig
factory-default [current]
initial [next poweron]
Reboot the server, and it will come up with the "initial" configuration.


Logical domain creation: Now that the system is ready, plan the logical domain configuration. In this document, we create a logical domain named guest1 with 4 VCPUs and 8 GB of memory.
primary-control01# ldm add-domain guest1

primary-control01# ldm add-vcpu 4 guest1
primary-control01# ldm add-memory 8G guest1
primary-control01# ldm add-vnet vnet1 primary-vsw0 guest1
primary-control01# ldm add-vdsdev /dev/dsk/c1t2d0s2 vol1@primary-vds0
primary-control01# ldm add-vdisk vdisk1 vol1@primary-vds0 guest1
primary-control01# ldm add-vdsdev /image/sol-10-u8-ga-sparc-dvd.iso iso_vol@primary-vds0
primary-control01# ldm add-vdisk cdrom iso_vol@primary-vds0 guest1

primary-control01# ldm set-var auto-boot\?=false guest1
primary-control01# ldm bind guest1

primary-control01# ldm start-domain guest1

You can see the domain using "ldm list-domain":
primary-control01# ldm list-domain

NAME     STATE     FLAGS  CONS  VCPU  MEMORY  UTIL  UPTIME
primary  active    -n-cv  SP    4     4G      0.2%  3h 4m
guest1   inactive  -----        4     8G

Connect to the logical domain console by telnetting to its virtual console port.
primary-control01# telnet localhost 5000

Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Connecting to console "guest1" in group "guest1" ....
Press ~? for control options ..
{0} ok
{0} ok boot cdrom


You will then go through the normal Solaris CD-ROM installation procedure. Customize the installation according to your requirements.
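For repeatable builds, the guest-domain steps above can be collected into a small script. The sketch below only prints the ldm commands for review (a dry run) rather than executing them; the domain name, resource sizes, and disk backend are example values to adjust for your environment.

```shell
#!/bin/sh
# Dry-run sketch: print the ldm commands needed to build a guest domain
# so they can be reviewed before running them on the control domain.
# The domain name, sizes, and disk device passed in are examples only.
print_ldm_cmds() {
  domain=$1; vcpus=$2; mem=$3; disk=$4
  echo "ldm add-domain ${domain}"
  echo "ldm add-vcpu ${vcpus} ${domain}"
  echo "ldm add-memory ${mem} ${domain}"
  echo "ldm add-vnet vnet1 primary-vsw0 ${domain}"
  echo "ldm add-vdsdev ${disk} vol1@primary-vds0"
  echo "ldm add-vdisk vdisk1 vol1@primary-vds0 ${domain}"
  echo "ldm set-var auto-boot?=false ${domain}"
  echo "ldm bind ${domain}"
  echo "ldm start-domain ${domain}"
}

print_ldm_cmds guest1 4 8G /dev/dsk/c1t2d0s2
```

Pipe the output to a file, review it, and run it on the control domain once it matches your intended configuration.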

Refer to http://www.sun.com/servers/coolthreads/ldoms/index.jsp for more details.


Feb 8, 2009
Volume Layouts in VxVM
Non Layered Volumes

In a non-layered volume, a subdisk maps directly to a VM disk.

Layered Volumes

A layered volume is a virtual Veritas Volume Manager object built on top of other volumes by mapping its subdisks to underlying volumes; hence it is called a "volume on volume".

 

Volume layouts supported in VxVM

Concatenation

Concatenation maps data in a linear manner onto one or more subdisks in a plex. To access all of the data in a concatenated volume sequentially, data is first accessed in the first subdisk from beginning to end. Data is then accessed in the remaining subdisks sequentially from beginning to end, until the end of the last subdisk.

Striping

Striping maps data so that the data is interleaved among two or more physical disks. A striped plex contains two or more subdisks, spread out over two or more physical disks. Data is allocated alternately and evenly to the subdisks of a striped plex.

Mirroring (RAID-1)

Mirroring maintains multiple complete copies of a volume's data, each copy in its own plex on a separate physical disk. If a disk fails, the data remains available from a surviving mirror.

Striping plus mirroring (mirrored-stripe or RAID-0+1)

The combination of mirroring layered above striping is called a mirrored-stripe layout: data is striped across disks, and the striped plex is then mirrored.

Mirroring plus striping (striped-mirror, RAID-1+0 or RAID-10)

The combination of striping layered above mirroring is called a striped-mirror layout: data is mirrored first, and the mirrors are then striped.

RAID-5 (striping with parity)

RAID-5 provides data redundancy by using parity. Parity is a calculated value used to reconstruct data after a failure. The data and calculated parity are contained in a plex that is "striped" across multiple disks.
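The striping-based layouts above all interleave data by the same simple arithmetic. The sketch below is an illustration only (not a VxVM command): given a block offset within a striped plex, it computes which column (subdisk) the block lands in and the offset within that column. The stripe unit of 128 blocks and the 3 columns are arbitrary example values.

```shell
#!/bin/sh
# Illustration of stripe mapping: which column of a striped plex does a
# given block offset fall in, and at what offset within that column?
# stripe = offset / unit; column = stripe mod ncols;
# column offset = (stripe / ncols) * unit + (offset mod unit)
stripe_map() {
  offset=$1; unit=$2; ncols=$3
  stripe=$((offset / unit))
  col=$((stripe % ncols))
  col_off=$(( (stripe / ncols) * unit + offset % unit ))
  echo "block ${offset} -> column ${col}, offset ${col_off}"
}

stripe_map 0 128 3     # first stripe unit, column 0
stripe_map 128 128 3   # second stripe unit, column 1
stripe_map 300 128 3   # partway into the third stripe unit, column 2
stripe_map 384 128 3   # wraps to column 0, second stripe row
```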

Changing Nodename in HP-UX

Below are the steps to change the nodename in HP-UX:

HP-UX ap21cnedc101 # uname -a

HP-UX ap21cned B.11.23 U ia64 2222362797 unlimited-user license

HP-UX ap21cnedc101 # kctune expanded_node_host_names=1

WARNING: The automatic 'backup' configuration currently contains the
         configuration that was in use before the last reboot of this
         system.
     ==> Do you wish to update it to contain the current configuration
         before making the requested change? y
       * The automatic 'backup' configuration has been updated.
WARNING: Setting the expanded_node_host_names parameter to 1 will allow
         administrators to set node and host names larger than 8 and 64
         characters/bytes, respectively.  It is strongly recommended
         that all the documentation included with the NodeHostNameXpnd
         product bundle be understood before setting larger names.
         Larger names can cause some applications which use those names
         to exhibit anomalous or incorrect behavior.
       * The requested changes have been applied to the currently
         running system.

Tunable                             Value  Expression  Changes
expanded_node_host_names  (before)      0  Default     Immed
                          (now)         1  1

HP-UX ap21cnedc101 # /sbin/set_parms hostname

_______________________________________________________________________________
For the system to operate correctly, you must assign it a unique
system name or "hostname".  The hostname can be a simple name
(example: widget) or an Internet fully-qualified domain name
(example: widget.region.mycorp.com).

A simple name, or each dot (.) separated component of a domain name, must:

    * Start and end with a letter or number.
    * Contain no more than 63 characters per component.
    * Contain no more than 255 total characters.
    * Contain only letters, numbers, underscore (_), or dash (-).
      The underscore (_) is not recommended.

NOTE: The first or only component of a hostname should contain no more
      than 8 characters and the full hostname should contain no more
      than 63 characters for maximum compatibility with HP-UX software.

The current hostname is ap21cnedc101.
_______________________________________________________________________________

Enter the system name, then press [Enter] or simply press [Enter]
to retain the current host name (ap21cnedc101):

The hostname (or first component of the hostname) "ap21cnedc101"
contains more than 63 characters.  This is valid, but the system
name as reported by `uname' will be truncated to "ap21cnedc101".

Press [Enter] to continue...

You have chosen ap21cnedc101 as the name for this system.
Is this correct?
Press [y] for yes or [n] for no, then press [Enter] y
_______________________________________________________________________________
  Working...
_______________________________________________________________________________

HP-UX ap21cnedc101 # uname -a

HP-UX ap21cnedc101 B.11.23 U ia64 2222362797 unlimited-user license

Auditing Users in HP-UX 11.31

Auditing Users

By default, when system auditing is on, the audit status for all users is on. New users added to the system are automatically audited.

You can monitor what users are doing on HP-UX systems using auditing. To change which users are audited, choose one of the following options:

Audit all users.

By default, audit status for all users is set to on when the audit system is turned on. New users added to the system are automatically audited. If auditing has been turned off for all users, set AUDIT_FLAG=1 in the /etc/default/security file.

Do not audit any users.

To turn off auditing for all users, follow these steps:

1. Check to see which users are already being audited. To check, follow these steps:

a. Check the AUDIT_FLAG setting in the /etc/default/security file.

b. Check the AUDIT_FLAG setting stored in the user database using the following command:

# userdbget -a AUDIT_FLAG

2. Set AUDIT_FLAG=0 in the /etc/default/security file.

Audit specific users.

To configure auditing for specific users, follow these steps:

1. Deselect auditing for all users by setting the AUDIT_FLAG=0 in the /etc/default/security file.

2. Configure auditing for a specific user using the following command:

# /usr/sbin/userdbset -u user-name AUDIT_FLAG=1

If the audit system is not already enabled, use the audsys -n command to start the auditing system. Auditing changes take effect at the user's next login.
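The per-user steps above can be scripted. The sketch below is a dry run: it only prints the commands needed to restrict auditing to a given list of users, so you can review them before running anything on a live HP-UX system. The usernames passed in at the bottom are hypothetical examples.

```shell
#!/bin/sh
# Dry-run sketch: print the commands that would restrict auditing to the
# named users, per the steps above. Review the output before executing
# it as root on an HP-UX 11.31 system; usernames here are examples.
audit_only_users() {
  echo "# first set AUDIT_FLAG=0 in /etc/default/security"
  for u in "$@"; do
    echo "/usr/sbin/userdbset -u ${u} AUDIT_FLAG=1"
  done
  echo "audsys -n   # start the audit system if not already enabled"
}

audit_only_users alice bob
```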

The audited information can be viewed in the audit log files, which are created as shown below:

Configuring Audit Trails

Use the audsys command to specify the primary audit log file to collect auditing data:

# audsys -n -N2 -c my_audit_trail -s 5000

This example starts the audit system and records data in the my_audit_trail directory, using two writer threads. The trail size is set to 5000 KB.

VERITAS Volume Manager Objects

 

VxVM uses two types of objects to handle storage management: physical objects and virtual objects.

 

Physical objects - Physical disks or other hardware that the operating system accesses through block and raw device interfaces.

Ex. - Disks and disk arrays.

Virtual objects - VxVM creates virtual objects called volumes on physical disks. Volumes are logical entities that are accessed by file systems, databases, or other applications in the same way physical disks are accessed. Volumes are composed of other virtual objects (plexes and subdisks) that are used in changing the volume configuration.

Ex. - Volumes, plexes, subdisks, and disk groups.

 

VERITAS Volume Manager Daemons

 

VxVM relies on the following constantly running daemons for its operation:

 

■ vxconfigd—The VxVM configuration daemon maintains disk and group configurations. It communicates configuration changes to the kernel, and modifies configuration information stored on disks.

■ vxiod—VxVM I/O kernel threads provide extended I/O operations. By default, 16 I/O threads are started at boot time, and at least one I/O thread must continue to run at all times.

■ vxrelocd—The hot-relocation daemon monitors VxVM for events that affect redundancy, and performs hot-relocation to restore redundancy.

Converting a nolargefiles HFS file system to a largefiles HFS file system in HP-UX

Below is the procedure to convert an HFS file system from nolargefiles to largefiles.

Difference between nolargefiles and largefiles:

A file system created with the nolargefiles option cannot store any files larger than 2 GB.

To allow larger files, specify the largefiles option when creating the file system:

# newfs -F hfs -o largefiles /dev/vg02/rlvol1

The default is nolargefiles: if no option is given, the maximum file size in the file system is limited to 2 GB.

If the file system was created with the nolargefiles option, it can be converted to largefiles as follows:

1. Unmount the file system.

2. Convert the file system to largefiles using the command below:

# fsadm -F hfs -o largefiles /dev/vg03/rlvol1

3. Mount the file system back.
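The three steps above can be captured in a small helper. This is a dry-run sketch that only prints the commands so they can be reviewed first; the mount point and raw device passed in are example values for your environment.

```shell
#!/bin/sh
# Dry-run sketch of the conversion steps above: print the commands to
# convert an HFS file system to largefiles. The mount point and raw
# device are examples; review the output before running as root on HP-UX.
convert_to_largefiles() {
  mntpt=$1; rawdev=$2
  echo "umount ${mntpt}"
  echo "fsadm -F hfs -o largefiles ${rawdev}"
  echo "mount ${mntpt}"
}

convert_to_largefiles /data /dev/vg03/rlvol1
```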