I am using a Linux HA cluster on SUSE 12 SP3 (SLE HA 12 SP3).
I have created a custom CRM resource (I call it ucaproc) using the OCF resource agent "ocf::heartbeat:anything". My question, however, is about the affinity that CRM resources have to nodes in a Linux HA cluster. The resource called "failover-ip", which serves the virtual IP for the cluster, always starts on node HA1, but my custom resource (ucaproc) always runs on node HA2 by default. See the output of the "crm status" command below, which shows the cluster resources:
crm status
Stack: corosync
Current DC: HA1 (version 1.1.16-6.5.1-77ea74d) - partition with quorum
Last updated: Thu Aug 8 12:21:33 2019
Last change: Thu Aug 8 10:44:45 2019 by root via cibadmin on HA1
2 nodes configured
2 resources configured
Online: [ HA1 HA2 ]
Full list of resources:
failover-ip (ocf::heartbeat:IPaddr2): Started HA1
ucaproc (ocf::heartbeat:anything): Started HA2
How can I force my custom resource (ucaproc) to run on the same node as "failover-ip"? Basically, I want both the "failover-ip" and "ucaproc" resources to run on the same node (active node HA1), and when that node fails I want both to fail over together to the other node (standby node HA2). Both nodes are active per se; I just treat them as active and passive based on which resources are running.
Thanks for help
Yogesh Devi
By default, Pacemaker tries to spread resources across the cluster nodes, which is most likely why the two resources keep starting on different nodes. To make sure that resources run on the same node, use a colocation constraint. For example:
crm(live)configure# colocation failover-ip_with_ucaproc inf: failover-ip ucaproc
This instructs Pacemaker to weight failover-ip with a score of infinity (1,000,000) toward whichever node is running ucaproc.
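Note that in the crm shell's colocation syntax the first resource listed is the dependent one, so the constraint above places failover-ip wherever ucaproc happens to run. If you instead want the virtual IP to lead (ucaproc follows failover-ip, and the IP comes up first), you could reverse the arguments and add an order constraint. A sketch using the resource names from the question; the constraint IDs are arbitrary:

```shell
# Place ucaproc on whichever node is running failover-ip
# (the first resource listed is the dependent one):
crm(live)configure# colocation ucaproc_with_ip inf: ucaproc failover-ip

# Optionally, start failover-ip before ucaproc on that node:
crm(live)configure# order ip_before_ucaproc inf: failover-ip ucaproc
crm(live)configure# commit
```

With a mandatory (infinity) colocation score, the two resources will always fail over together; if Pacemaker cannot place failover-ip anywhere, ucaproc will not run either.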
For more information on colocating resources you can review the Pacemaker documentation here: https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/s-resource-colocation.html
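As an alternative to separate constraints, the two resources can be put into a resource group, which implies both colocation and ordering: members run on the same node and start in the order listed. A minimal sketch, again assuming the resource names from the question (the group name is arbitrary):

```shell
# Group the virtual IP and the custom resource; failover-ip
# starts first, and both always run on the same node:
crm(live)configure# group grp-ucaproc failover-ip ucaproc
crm(live)configure# commit
```

A group is often the simpler option when the resources form one logical service, while explicit colocation/order constraints give finer control (e.g. different scores per constraint).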