Sunday, November 14, 2010

My MyBookLive Notes

My old MyBook World 1 TB NAS drive is dying; it has been freezing on a daily basis. So I bought a new external drive, the MyBookLive 1 TB. Nice device: it has a GigE interface but no USB port.

In most cases, the first thing I do on any smart device running Linux (like a smartphone) is hack into it, enable SSH and a bunch of other stuff I want, and then look for guidance on the web about removing unwanted processes (like the MioNet Java process on the MyBook World, which I replaced with a thin Apache).


A few notes for external network drive users:

a) Enable the web password. i.e. at http://mybooklive, in the Users page, set a password for the admin user.
b) Disable DHCP and set a static address. I did this on the MyBook World to make NFS work all the time, but this time around I think of it as another way to secure the device. The static IP choice gives you an option to specify a DNS server; I entered a bad address here, something in a non-routable IP segment that is never up. This way the MyBookLive will never be able to resolve a URL's FQDN, preventing any hacker code from connecting to the internet unless it uses IPs instead of names.
c) Set up a firewall policy on your router. This varies depending on what kind of wireless router you are plugged into; Linksys routers can have a policy to block all internet access for a given IP address, and I do this for my MyBookLive system.

Upgrading firmware ::

You will have to undo both step b and step c to upgrade the firmware using the web interface. A better option is to download the firmware from the WD website onto your laptop and apply it from there.

Basically, don't let your external NAS at home get out of your LAN. Put it in your local subnet jail.

I just learned a trick (thanks, bloggers) to enable SSH:

Go to http://mybooklive OR the IP, then to Settings -> Utilities -> Import / Export Current Configuration.

Export it to a text file, open it from unix, and find this line:

ssh_enable="disabled"

Change it to ssh_enable="enabled".

Once done, import this configuration file. This will reboot the system. Now the first thing you want to do is log in as root/welc0me using an SSH client and change the root password to something only you know.
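If you prefer to flip the flag from a shell, a one-liner like this works on the exported file (a sketch; config.xml stands in for whatever name you gave the export):

sed 's/ssh_enable="disabled"/ssh_enable="enabled"/' config.xml > config_ssh.xml

Import config_ssh.xml through the same Utilities page, wait for the reboot, then ssh in and run passwd.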

Have fun ......

Samba on the MyBookLive isn't as stable as the one I had on the MyBook World. I am looking for a stable version; once I find one, I will update the blog with directions for upgrading to it.

Wednesday, November 21, 2007

Website started.

Check out www.kapilraj.com

Saturday, June 30, 2007

iSCSI on AIX

iSCSI (Internet Small Computer Systems Interface) is a transport protocol for sending SCSI packets over TCP/IP networks. iSCSI initiators and iSCSI targets are the key components in iSCSI architecture. Initiators and targets are devices (software or hardware) that package and transfer SCSI information over an IP network. An iSCSI initiator encapsulates SCSI commands, data, and status information in iSCSI packets and sends the packets to an iSCSI target residing on a storage device.

Terminology:-

IQN (iSCSI qualified name): A naming standard supported by the iSCSI protocol. IQN names are globally unique and take the form iqn. followed by a date and a reversed domain name, e.g. iqn.2001-04.com.example:storage.lun1.

Initiator (iSCSI initiator): An iSCSI endpoint, identified by a unique iSCSI name, which begins an iSCSI session by issuing a command to the other endpoint (the target). iSCSI initiators can be hardware (an HBA) or software.

Target (iSCSI target): An iSCSI endpoint that receives and services the commands issued by the iSCSI initiator. The iSCSI target runs at the storage end and is identified by an IQN.


AIX iSCSI Driver ( Initiator )

AIX has an inbuilt iSCSI driver which can be used to configure targets, allowing the system to use iSCSI LUNs configured on the storage device. Please consult the storage manufacturer's documentation to check what configuration is supported; EMC, for example, documents that only the software iSCSI initiator is supported on AIX for EMC products. The first iSCSI protocol adapter can be identified as /dev/iscsi0 on AIX.
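A quick sanity check that the protocol device exists (assuming iscsi0 is the first instance, as above):

lsdev -l iscsi0

This should report iscsi0 as Available.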

The AIX iSCSI driver can use the following authentication methods while connecting to a target.

* CHAP ( Challenge Handshake Authentication Protocol )
* MD5 ( Message Digest )

The /etc/iscsi/targets file lists each target that an AIX system's iSCSI driver will use. There is also an option to store this information in the ODM.


Configuring iSCSI on AIX


The AIX iSCSI driver can be configured using the smitty fast path "smitty iscsi".

Select iSCSI Protocol Device -> Change / Show Characteristics of an iSCSI Protocol Device, then select the iSCSI adapter:

iSCSI Protocol Device                          iscsi0
Description                                    iSCSI Protocol Device
Status                                         Available
iSCSI Initiator Name                           [iqn.com.xxxxxxx]
Maximum number of commands to queue to driver  [200]
Discovery Policy                               file
Maximum Targets Allowed                        [16]
Apply change to DATABASE only                  no


Set these parameters based on the standards and practices you follow. Use the default name (IQN) for the iSCSI initiator, and set Maximum Targets Allowed to the number of targets you intend to configure on this system.

Once done, send the initiator IQN to the SAN administrator so that it can be used when defining the targets they create on the storage box.
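If you want to pull the initiator name out without walking the smitty screens, something like this should do (a sketch against the iscsi0 device assumed above):

lsattr -El iscsi0 -a initiator_name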

Configuring the LUNs on AIX.

We need the following details from the storage team to configure the iSCSI LUNs:

iSCSI target IQN name

IP address of the storage unit

Valid port # The default is 3260, but we need to confirm this one

Add a new line to the /etc/iscsi/targets file using the following syntax:

ip_addr_storage valid_port iqn_name_of_target
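For example, an entry might look like the following (hypothetical address and IQN, with the default port):

10.10.10.20 3260 iqn.2001-05.com.example:storage.lun1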

Once this configuration is done, save the file. Please do not change the permissions of the /etc/iscsi directory or the /etc/iscsi/targets file.

Run cfgmgr to define the iSCSI LUNs:

cfgmgr -vl iscsi0

A new iSCSI LUN should look like the following

/root>lsdev -Ccdisk | grep -i iscsi
hdisk2 Available  Other iSCSI Disk Drive

/root>lsattr -El hdisk2
clr_q         no                               Device CLEARS its Queue on error  True
host_addr     xx.xx.xx.xx                      Hostname or IP Address            False
location                                       Location Label                    True
lun_id        0x0                              Logical Unit Number ID            False
max_transfer  0x40000                          Maximum TRANSFER Size             True
port_num      0xcbc                            PORT Number                       False
pvid          000c7ef2a7f1ed190000000000000000 Physical volume identifier        False
q_err         yes                              Use QERR bit                      True
q_type        simple                           Queuing TYPE                      True
queue_depth   1                                Queue DEPTH                       True
reassign_to   120                              REASSIGN time out value           True
rw_timeout    30                               READ/WRITE time out value         True
start_timeout 60                               START unit time out value         True
target_name   xxxxxx.com.xxxxxxx:volume-1      Target NAME                       False



AIX Recommendations ( iSCSI )

Create LVM volume groups with 'Auto Varyon = No'. The reason is that the varyonvg step at boot runs well before the TCP/IP subsystem is configured; iSCSI depends on TCP/IP, so these volume groups need to be varied on after the TCP/IP subsystem is started.

chvg -a n iscsivg01

Filesystems should be configured with "AutoMount = False":

crfs -v jfs2 ........ -A no

The best way to do this is to have a mount "type" allocated for the iSCSI filesystems (the -u flag below) and a corresponding /etc/inittab entry that mounts those filesystems once the device is ready.

crfs -v jfs2 -d iscsilv01 -m /iscsifs01 -A no -u iscsi -a log=INLINE

/etc/inittab then calls a script after the TCP/IP subsystem has started, which does the equivalent of:

cfgmgr -l iscsi0

varyonvg iscsivg01

mount -t iscsi

Never span a volume group across both iSCSI and non-iSCSI drives.

Configuring iSCSI volumes during system startup

I was not able to locate a documented standard process from IBM for automatically activating iSCSI volumes. So I have implemented a close-to-standard procedure to get the iSCSI volumes configured while the system boots.

The following script checks for and configures the iSCSI volumes. It has to be hooked into the inittab, and the script itself placed in the right location:

mkitab "iscsivg:2:boot:/etc/rc.iscsivg >/dev/null 2>/dev/null"

This makes init run it only once during system startup, when the system moves to run level 2; init does not wait for its completion. The script is intelligent enough to send out an alert should anything go wrong, and the detailed log is written to /var/tmp/iscsivg.log.

cat /etc/rc.iscsivg

#!/usr/bin/ksh
#
# Configure iSCSI Volume groups
# Author KapilRaj
# Date 2/14/2007
#
# Who When What

export DEBUG=No

function set_env {
[ ${DEBUG} = "Yes" ] && set -x
# First field of the first non-comment line in /etc/iscsi/targets is the target's IP
target=`grep -v "^#" /etc/iscsi/targets | awk '{print $1; exit}'`
export status=1
export tmpfile1=/tmp/iscsivg1.$$
export tmpfile2=/tmp/iscsivg2.$$
export logfile=/var/tmp/iscsivg.log
export email_id="kkoroth@domain_name.com"
export err_cond=0
touch ${tmpfile1} ${tmpfile2}
[ -f ${logfile} ] && mv ${logfile} ${logfile}.`date +%H%M%S%m%d%y` # Rotate the log
}

function check_target {
[ ${DEBUG} = "Yes" ] && set -x
# Wait until the storage target answers a ping, retrying every 30 seconds
until [ ${status} -eq 0 ]
do
ping -c 1 ${target} 1>/dev/null 2>/dev/null
export status=$?
[ ${status} -ne 0 ] && sleep 30
done
}

function cfg_iscsi {
[ ${DEBUG} = "Yes" ] && set -x
cfgmgr -vl iscsi0 1>>${logfile} 2>>${logfile}
sleep 10
# Collect the iSCSI hdisks, then vary on each unique volume group found on them
lsdev -Ccdisk | grep -i iscsi | awk '{print $1}' > ${tmpfile1}
for iscsivg in `lspv | grep -wf ${tmpfile1} | awk '{print $3}' | grep -v None | sort -u`
do
varyonvg ${iscsivg} 1>>${logfile} 2>>${logfile}
if [ $? -ne 0 ]
then
echo "${iscsivg} can not be varied on !!" >> ${logfile}
export err_cond=1
else
lsvgfs ${iscsivg} >> ${tmpfile2}
fi
done
# Mount every filesystem belonging to the varied-on volume groups
for iscsifs in `sort ${tmpfile2}`
do
mount ${iscsifs} 1>>${logfile} 2>>${logfile}
if [ $? -ne 0 ]
then
echo "${iscsifs} can not be mounted !!" >> ${logfile}
export err_cond=1
fi
done
}

function cleanup_alert {
[ ${DEBUG} = "Yes" ] && set -x
# Mail the log if anything failed, then remove the temp files
[ ${err_cond} -eq 1 ] && mail -s "Errors while configuring iSCSI on `uname -n`" ${email_id} < ${logfile}
rm -f ${tmpfile1} ${tmpfile2}
}

# Do the stuff
[ ${DEBUG} = "Yes" ] && set -x
set_env
check_target
cfg_iscsi
cleanup_alert


Security Related information

The /etc/iscsi directory and the /etc/iscsi/targets configuration file are protected from non-privileged users through file permissions and ownership. Note that CHAP secrets are saved in the /etc/iscsi/targets file as clear text.

Note:

Do not change the original file permissions and ownership of these files.

Network tuning

To ensure the best performance:

* Enable the TCP Large Send, TCP send and receive flow control, and Jumbo Frame features of the AIX Gigabit Ethernet Adapter and the iSCSI Target interface.
* Tune network options and interface parameters for maximum iSCSI I/O throughput on the AIX system:

o Enable the RFC 1323 network option.
o Set up the tcp_sendspace, tcp_recvspace, sb_max, and mtu_size network options and network interface options to appropriate values.

The iSCSI Software Initiator's maximum transfer size is 256KB. Assuming that the system maximums for tcp_sendspace and tcp_recvspace are set to 262144 bytes, an ifconfig command used to configure a gigabit Ethernet interface might look like the following:

ifconfig en2 xx.xx.xx.xx mtu 9000 tcp_sendspace 262144 tcp_recvspace 262144

o Set the sb_max network option to at least 524288, and preferably 1048576.
o Set the mtu_size to 9000.
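Put together, the tuning above might look like the following no commands (a sketch; the values are simply the recommendations from the list above):

no -o rfc1323=1
no -o sb_max=1048576
no -o tcp_sendspace=262144
no -o tcp_recvspace=262144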

Sunday, June 24, 2007

HP APA Auto Port Aggregation

Friends,

So here I am once again with notes from another wonderful product from Hewlett-Packard. This product can be used to trunk two ports [ load balancing ] OR to set up a failover adapter. Similar products are available on Solaris and AIX as well, named IPMP and EtherChannel respectively. I have not worked on IPMP, but I have on EtherChannel.

For those who have never heard of such a product: it offers load balancing OR failover for Ethernet adapters running TCP/IP. That is, it can be used to run a single IP address over two Ethernet interfaces connected to two different switch ports (to get 2x the bandwidth), or to run a single IP address in active/passive mode.

Here I am going to discuss the failover scenario.

Requirement :-

lan0 and lan1 on this HP-UX system are connected to two different switches, and the switch ports are configured to be on the same VLAN. lan0 carries the IP address primarily; in case lan0, its cable, or its switch port fails, the IP address should be relocated to lan1 without any clients noticing that such a change has happened.

Basically there are two configuration files for APA when used for failover:

/etc/rc.config.d/hp_apaportconf
/etc/lanmon/lanconfig.ascii

1) Configure the IP address on the primary adapter on the HP-UX system (/etc/rc.config.d/netconf).
2) Configure /etc/rc.config.d/hp_apaportconf as follows. This file should list all the adapters involved in the APA groups:

HP_APAPORT_INTERFACE_NAME[0]=lan0
HP_APAPORT_CONFIG_MODE[0]=LAN_MONITOR
HP_APAPORT_INTERFACE_NAME[1]=lan1
HP_APAPORT_CONFIG_MODE[1]=LAN_MONITOR

3) Configure /etc/lanmon/lanconfig.ascii as follows:

NODE_NAME hostname_here # Replace with the real hostname of the system

POLLING_INTERVAL 10000000
DEAD_COUNT 3
LM_RAPID_ARP off
LM_RAPID_ARP_INTERVAL 1000000
LM_RAPID_ARP_COUNT 10

FAILOVER_GROUP lan900 # First failover group (public network)
STATIONARY_IP xx.xx.xx.xx
PRIMARY lan0 5
STANDBY lan1 3


... Phew ....

Stop and start APA using the following commands:

/sbin/init.d/hplm stop

/sbin/init.d/hpapa stop

/sbin/init.d/hpapa start

/sbin/init.d/hplm start
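To verify that the failover group came up, something like the following should show lan900 with its member ports and the IP address (a sketch; the exact lanscan output varies by APA version):

lanscan -q
netstat -in | grep lan900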


Notes :

1) Before the configuration, check that the ports are configured to the same speed and duplex settings ( lanadmin ).
2) Try configuring an IP address temporarily on each adapter and make sure that you can ping other hosts in the subnet. You may use the linkloop command for this, but some switches don't support it.
3) When APA is running, you will no longer see the physical adapter; it will show lan900 running the IP address.
4) If there are stacked IP addresses on the primary adapter, those are also moved during a failover. The limit for stacked IP addresses is 9.
5) Put your questions in the comments and I will be more than willing to help you to the best of my ability.

Regards, Kaps

Tuesday, June 12, 2007

VxVM Basic Tasks

So here I am with a little more knowledge about this wonderful Volume Manager.

This information is dedicated to the HP ITRC Forum folks, who helped me to their best. It applies to VxVM implementations on HP-UX.

Defining a disk for use by VxVM
a) /etc/vx/bin/vxdisksetup -i cXtXdX

Creating a disk group
a) vxdg init dg_name disk01=cXtXdX

Extending an existing disk group
a) vxdg -g dg_name adddisk disk02=cXtXdX

Creating a volume in a disk group
a) vxassist -g dg_name make vol_name 10g

Creating a volume in a disk group on a specific disk
a) vxassist -g dg_name make vol_name 1g alloc=disk20

Creating a striped volume
a) vxassist -g dg_name make stripevol 20g layout=striped ncols=4 stwidth=128

Creating a filesystem on a volume
a) mkfs -F vxfs -o bsize=8192,largefiles /dev/vx/rdsk/dg_name/oravol01

Updating /etc/fstab
a) vi /etc/fstab ( see the example line after this list )

Displaying a disk group
a) vxprint -g disk_group

Displaying a volume
a) vxprint -ht vol_name

Increasing a filesystem
a) vxresize -b -F vxfs -t homevolresize homevol 10g disk10 disk11
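For the /etc/fstab step above, a typical line for a VxVM volume might look like this (hypothetical disk group, volume and mount point):

/dev/vx/dsk/dg_name/oravol01 /oradata vxfs delaylog 0 2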


Mirroring a rootdg
a) vxdisksetup -iB cxtydz ( to initialise the disk under the volume manager )
b) vxdg -g rootdg adddisk rootdisk=cxtydz ( to add the disk to the disk group )
c) vxrootmir -v -b -R cxtydz ( to create the mirror )

Wednesday, June 06, 2007

VxVM

So here I am, having got a chance to work with VxVM in depth. I thought of sharing some of the concepts I have understood so far. It could have minor mistakes.....

As I am very comfortable working with LVM (like most SAs), I think it makes more sense to compare it to the new product.

Basically, VxVM offers a number of features beyond the LVM of AIX, HP-UX, or SuSE Linux. I am not too sure customers make much use of them these days, though; most of those features are now done at the storage level, saving CPU cycles on your production systems.

LVM :-

LVM is probably the most straightforward product in the volume manager family.

PV - Physical Volume - a physical disk - hdiskX (AIX), cXtXdX (HP-UX & SuSE)
VG - Volume Group - a group of PVs - vgXX (AIX), /dev/vgXX (HP-UX & SuSE)
PP/PE - Physical partitions/extents - the chunks a VG is split into
LP/LE - Logical partitions/extents - once a PP/PE is allocated to an LV, you call it an LP/LE
LV - Logical Volume - a slice of the VG made of PPs/PEs - lv01 (AIX), /dev/vgXX/lvolXX (HP-UX & SuSE)

Now, you can place a filesystem on the LV or use it raw.

With LVM you can perform some of the most useful functions any SA would die to have, like:

migrate PPs/PEs from one disk to another online
add a PV to a volume group and extend / reduce an LV online [ with a supporting JFS if you have a filesystem on it ]


VXVM :-

VM Disk - a physical disk
Subdisk - a partition of a VM disk, somewhat similar to PEs / PPs
Plex - made of at least one subdisk
Volume - made of a plex; if you have two plexes you are mirrored ( I could be wrong here )

Now we can format the volume and mount it to use as a filesystem.
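As a quick illustration of the plex idea, here is a hedged sketch (dg_name and mirrvol are made-up names): asking vxassist for two mirrors gives you a volume with two plexes, and vxprint shows the v / pl / sd hierarchy.

vxassist -g dg_name make mirrvol 1g layout=mirror nmirror=2
vxprint -g dg_name -ht mirrvol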

I will add more about VxVM Soon ... Catch you later.

Friday, January 12, 2007

NIM Basics

Principle : -

We use NIM, or any such network installation manager, for ease of administration. How does that help? To install new software or re-install the operating system, we [ the sysadmins ] need not run all the way to the datacentre, locate the machine, find its CD drive, and keep swapping CDs. That was the reason such products came in. It also helps you do the work more efficiently, in a controlled and managed manner.

I am not sure what product Solaris uses, but HP-UX uses something called Ignite-UX which does almost the same thing. So most of the enterprise operating systems have this option of a NIM-like product.

In principle they make use of tftp, bootp and NFS utilities to do the job. For example, an AIX installation, I would guess, goes as follows:

a) Boot from the CD.
b) The CD is an operating system itself; it creates a memory filesystem, installs a basic operating system into it, and then runs the actual OS installation program under that temporary operating system in system memory.

So a network installation manager product does the same thing, but in a different way:

a) The NIM server is configured as a bootp server and is ready to deploy a basic boot file to get the installation started.
b) It transfers a little program (or programs) via tftp onto the client and then runs it.
c) Most of the products use an NFS mount to get at the media [ we already configured the required resources on the NIM server ].
d) Run the installation program.

I may not be fully correct, but that is what I understood having worked on Ignite and NIM. There may be small vendor-specific differences, but basically they are the same.

NIM :-

NIM is a client-server product. We can define resources on the NIM server and bundle them to get things done.

The scope of NIM is much more than what I am planning to cover in this note.

I would set up the following on a basic NIM server:

a) NIM master
b) NIM clients
c) SPOT, mksysb, lpp_source and image.data resources [ one of each for every operating system level among the NIM clients ]

You can do all this with /usr/lpp/bos.sysmgt/nim/methods/nim_master_setup or via smitty eznim.

You can create a SPOT, mksysb and lpp_source for another operating system version by mounting the first CD on the NIM master and creating them from there.
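For instance, defining an lpp_source from the first CD might look like this (a sketch; the location path and the resource name aix52lppsrc are hypothetical):

nim -o define -t lpp_source -a server=master -a location=/export/nim/aix52lppsrc -a source=/dev/cd0 aix52lppsrc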

So there you go, you are ready for a NIM operation.

Check the man page for the nim command; you can do almost all the operations from there.

For example, nim -o allocate -a spot=aix52spot -a lpp_source=aix52lppsrc -a mksysb=aix52basicmksysb machine1

This allocates the SPOT resource aix52spot, the lpp_source resource aix52lppsrc and the mksysb resource aix52basicmksysb to machine1. All you have to do after this is boot the client into SMS, configure the network interfaces, and boot through the network.
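After the allocation, the install itself is kicked off with a bos_inst operation (a sketch; source=mksysb because we allocated a mksysb resource above):

nim -o bos_inst -a source=mksysb -a accept_licenses=yes machine1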

Phew.... That is all about NIM that I wanted to know as a newbie. The product is much more than what I have mentioned; I will leave the further R&D to you.