Take el6 for example (el7 should have the equivalent rpms)
-bash-4.2$ hostname
Sysprov02.aglt2.org
-bash-4.2$ pwd
/home/packagers/wuwj/umatlas_test/packages/el6/x86_64
-bash-4.2$ ls condor*
condor-8.6.13-1.el6.x86_64.rpm
condor-all-8.6.13-1.el6.x86_64.rpm
condor-classads-8.6.13-1.el6.x86_64.rpm
condor-classads-devel-8.6.13-1.el6.x86_64.rpm
condor-external-libs-8.6.13-1.el6.x86_64.rpm
condor-externals-8.6.13-1.el6.x86_64.rpm
condor-kbdd-8.6.13-1.el6.x86_64.rpm
condor-procd-8.6.13-1.el6.x86_64.rpm
condor-python-8.6.13-1.el6.x86_64.rpm
Please note: use "rpm -qa | grep condor" to figure out which rpms to download, then update the umatlas repo from sysprov02. Because the osg repo also ships condor rpms, make sure to exclude condor in osg.repo or osg-el6.repo.
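The "figure out which rpms to download" step can be sketched as a small pipeline that strips version strings from the installed-package list. The sample lines below stand in for real `rpm -qa` output on a node; on a live system you would pipe `rpm -qa` directly.

```shell
# Derive the condor package names to fetch from an installed-rpm listing.
# The sample here is illustrative; replace the printf with `rpm -qa`.
rpm_qa_sample="condor-8.6.13-1.el6.x86_64
condor-classads-8.6.13-1.el6.x86_64
vim-enhanced-7.4.629-5.el6.x86_64"

printf '%s\n' "$rpm_qa_sample" \
    | grep '^condor' \
    | sed 's/-[0-9].*//' \
    | sort -u
```

This prints one base package name per line (condor, condor-classads), which tells you which rpms to pull into the umatlas repo.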
-bash-4.2$ more /etc/yum.repos.d/osg.repo
[osg]
exclude=*condor*
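As a sketch, the same exclude line also belongs in osg-el6.repo on el6 nodes; the stanza below shows only the relevant line (other fields such as baseurl are site-specific and omitted here):

```ini
[osg]
; ... site-specific baseurl etc. ...
exclude=*condor*
```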
#yum clean all; yum --enablerepo=umatlas_test update condor*
For different types of nodes, the procedure varies:
#yum --enablerepo=umatlas_test update condor*
#cf-agent -Kf failsafe.cf; cf-agent -K -b condor_t2
Make sure the worker node passes the sanity checks; if it does, the check will start condor again.
#sh /root/tools/health_check
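The gating logic above can be sketched as follows: condor is started only when the sanity check exits 0. On a real worker node the check is `sh /root/tools/health_check && service condor start`; stand-in functions are used here so the flow can be demonstrated outside the cluster.

```shell
# Hedged sketch of the restart gate; the stubs below stand in for the
# site's /root/tools/health_check script and the condor init script.
health_check() { return 0; }                 # stub: node passes the checks
start_condor() { echo "starting condor"; }   # stand-in for: service condor start

if health_check; then
    start_condor
else
    echo "health check failed; condor left stopped" >&2
fi
```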
Update condor via yum
#yum clean all; yum --enablerepo=umatlas_test update condor*
Reconfigure condor
#cf-agent -Kf failsafe.cf; cf-agent -K -b condor_t2; cf-agent -K -b umt3int
Check if condor is running
#systemctl status condor
The update will shut down condor services, but the submitted jobs won't be lost.
Update condor via yum
#yum clean all; yum --enablerepo=umatlas_test update condor*
#cf-agent -Kf failsafe.cf; cf-agent -K -b condor_t2
#service condor status