Building Lustre RPMs for a new kernel
These are very old directions, written for Lustre version 1.8
When we move to a new kernel on a machine where Lustre must also be mounted, new RPMs must be built
against that kernel.
The procedure is relatively simple; a consolidated sketch follows the steps below.
- Install the new kernel on the target machine, then boot into it.
- Run /afs/atlas.umich.edu/hardware/Lustre/build_lustre_client_rpms.sh
- Copy /usr/src/redhat/RPMS/x86_64/lustre*.rpm to a source repo, e.g.
- /afs/atlas.umich.edu/hardware/Lustre/2.6.18-238.5.1.el5
- Use the lustre-client and lustre-client-modules RPMs in any new build for a targeted machine
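A minimal consolidated sketch of the above, assuming the build script and AFS repo paths shown; naming the destination directory after the running kernel version (as in the example above) is our convention, not something the script does for you:

    # Confirm the node is running the new kernel
    uname -r

    # Build the Lustre 1.8 client RPMs against the running kernel
    /afs/atlas.umich.edu/hardware/Lustre/build_lustre_client_rpms.sh

    # Copy the resulting RPMs into an AFS repo directory named for the kernel
    KVER=$(uname -r)
    DEST=/afs/atlas.umich.edu/hardware/Lustre/${KVER}
    mkdir -p ${DEST}
    cp /usr/src/redhat/RPMS/x86_64/lustre*.rpm ${DEST}/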
For Lustre 2.7 and later (added Oct 28, 2016)
From this hpdd wiki page, plus our own mods. This assumes the work will be done on some idled worker node (WN).
- service rocks-grub stop
- chkconfig rocks-grub off
- service lustre_mount_umt3 stop
- chkconfig lustre_mount_umt3 off
- chkconfig lustre_prep off
- /atlas/data08/ball/admin/unload_lustre.sh
- yum erase lustre-client lustre-client-modules
- yum update kernel kernel-devel kernel-doc kernel-firmware kernel-headers kmod-openafs*
- Make a copy of these needed RPMs somewhere so they can be reused later
- In practice it may be necessary to modify the Rocks build and then do a "yum update"
- reboot to the new kernel
- The kernel-devel, make, rpm-build and python-docutils RPMs should already be installed; if not, install them
- Copy the src rpm to the machine
- Currently at /atlas/data08/ball/admin/LustreSL6/2.7.58/server/lustre-2.7.58-1_gdeb6fcd.src.rpm
- rpmbuild --rebuild --without servers lustre-2.7.58-1_gdeb6fcd.src.rpm
- yum localinstall lustre-client-2.7.58-2.6.32_642.6.2.el6.x86_64_gdeb6fcd.x86_64.rpm lustre-client-modules-2.7.58-2.6.32_642.6.2.el6.x86_64_gdeb6fcd.x86_64.rpm
This was run from /root/rpmbuild on the target system, and the RPMs ended up in /root/rpmbuild/RPMS/x86_64.
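Put together, the 2.7 client rebuild looks roughly like the sketch below. The paths and version strings are the ones listed above; the exact RPM file names produced by rpmbuild depend on the kernel version and Lustre git hash, so the wildcards are an assumption to be checked against /root/rpmbuild/RPMS/x86_64:

    # Quiesce the worker node: stop and disable Rocks grub handling and the Lustre mount
    service rocks-grub stop
    chkconfig rocks-grub off
    service lustre_mount_umt3 stop
    chkconfig lustre_mount_umt3 off
    chkconfig lustre_prep off

    # Unload the Lustre modules and remove the old client RPMs
    /atlas/data08/ball/admin/unload_lustre.sh
    yum erase lustre-client lustre-client-modules

    # Update the kernel and matching packages, then reboot into the new kernel
    yum update kernel kernel-devel kernel-doc kernel-firmware kernel-headers kmod-openafs*
    reboot

    # After the reboot: rebuild the client RPMs from the source RPM
    cd /root/rpmbuild
    cp /atlas/data08/ball/admin/LustreSL6/2.7.58/server/lustre-2.7.58-1_gdeb6fcd.src.rpm .
    rpmbuild --rebuild --without servers lustre-2.7.58-1_gdeb6fcd.src.rpm

    # Install the freshly built client RPMs
    cd /root/rpmbuild/RPMS/x86_64
    yum localinstall lustre-client-2.7.58-*.rpm lustre-client-modules-2.7.58-*.rpm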
Now, test that Lustre can be mounted and accessed
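A quick way to check, assuming the mount is normally handled by the lustre_mount_umt3 init script that was disabled earlier; the /lustre/umt3 mount point is an assumption, so adjust it to the real one:

    # Re-enable and start the Lustre mount service
    chkconfig lustre_prep on
    chkconfig lustre_mount_umt3 on
    service lustre_mount_umt3 start

    # Verify that the filesystem is mounted, reports space, and is readable
    mount -t lustre
    df -h -t lustre
    ls /lustre/umt3 | head    # hypothetical mount point

When the node goes back into production, presumably the other services turned off above (rocks-grub in particular) should be re-enabled as well.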
--
BobBall - 15 Apr 2011