Tags: jboss7.x, hornetq

JBoss EAP 6.4 HornetQ instances cluster together even though they are on different boxes


I have 2 different boxes in my configuration, and each box contains 2 instances of JBoss EAP 6.4. I am using the standalone-full-ha.xml configuration for all instances.

I start the 2 instances on the 2 different boxes in the following way:

standalone.sh -c standalone-full-ha.xml -b ipOfOneMachineInFirstBox -u 230.0.0.4 -Djboss.node.name=node1

standalone.sh -c standalone-full-ha.xml -b ipOfOneMachineInSecondBox -u 230.0.0.5  -Djboss.node.name=node1

When each JBoss instance is started, I can see messages in the log showing that they have connected with a cluster.

13:28:21,715 INFO  [org.hornetq.core.server] (Thread-28 (HornetQ-server-HornetQServerImpl::serverUUID=c08d2d4d-1444-11e9-8c21-839549bb27e4-1283024282)) HQ221027: Bridge ClusterConnectionBridge@7d2f5397 [name=sf.my-cluster.f79947a0-1444-11e9-b0dd-e959e93529ee, queue=QueueImpl[name=sf.my-cluster.f79947a0-1444-11e9-b0dd-e959e93529ee, postOffice=PostOfficeImpl [server=HornetQServerImpl::serverUUID=c08d2d4d-1444-11e9-8c21-839549bb27e4]]@557c5a54 targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@7d2f5397 [name=sf.my-cluster.f79947a0-1444-11e9-b0dd-e959e93529ee, queue=QueueImpl[name=sf.my-cluster.f79947a0-1444-11e9-b0dd-e959e93529ee, postOffice=PostOfficeImpl [server=HornetQServerImpl::serverUUID=c08d2d4d-1444-11e9-8c21-839549bb27e4]]@557c5a54 targetConnector=ServerLocatorImpl [initialConnectors=[TransportConfiguration(name=netty, factory=org-hornetq-core-remoting-impl-netty-NettyConnectorFactory) ?port=5445&host=**ipOfOnMachineInSecondBox**], discoveryGroupConfiguration=null]]::ClusterConnectionImpl@837873239[nodeUUID=c08d2d4d-1444-11e9-8c21-839549bb27e4, connector=TransportConfiguration(name=netty, factory=org-hornetq-core-remoting-impl-netty-NettyConnectorFactory) ?port=5445&host=**ipOfOneMachineInFirstBox**, address=jms, server=HornetQServerImpl::serverUUID=c08d2d4d-1444-11e9-8c21-839549bb27e4])) [initialConnectors=[TransportConfiguration

I think I need to change something in standalone-full-ha.xml. They both use the same configuration. Is just changing the name enough?

<subsystem xmlns="urn:jboss:domain:messaging:1.4">
        <hornetq-server>
            <clustered>true</clustered>
            <persistence-enabled>true</persistence-enabled>
            <security-enabled>false</security-enabled>
            <cluster-password>${jboss.messaging.cluster.password:CHANGE ME!!}</cluster-password>
            <backup>false</backup>
            <journal-type>NIO</journal-type>
            <journal-min-files>2</journal-min-files>

            <connectors>
                <netty-connector name="netty" socket-binding="messaging"/>
                <netty-connector name="netty-throughput" socket-binding="messaging-throughput">
                    <param key="batch-delay" value="50"/>
                </netty-connector>
                <in-vm-connector name="in-vm" server-id="0"/>
            </connectors>

            <acceptors>
                <netty-acceptor name="netty" socket-binding="messaging"/>
                <netty-acceptor name="netty-throughput" socket-binding="messaging-throughput">
                    <param key="batch-delay" value="50"/>
                    <param key="direct-deliver" value="false"/>
                </netty-acceptor>
                <in-vm-acceptor name="in-vm" server-id="0"/>
            </acceptors>

            <broadcast-groups>
                <broadcast-group name="bg-group1">
                    <socket-binding>messaging-group</socket-binding>
                    <broadcast-period>5000</broadcast-period>
                    <connector-ref>netty</connector-ref>
                </broadcast-group>
            </broadcast-groups>

            <discovery-groups>
                <discovery-group name="dg-group1">
                    <socket-binding>messaging-group</socket-binding>
                    <refresh-timeout>10000</refresh-timeout>
                </discovery-group>
            </discovery-groups>

            <cluster-connections>
                <cluster-connection name="my-cluster">
                    <address>jms</address>
                    <connector-ref>netty</connector-ref>
                    <discovery-group-ref discovery-group-name="dg-group1"/>
                </cluster-connection>
            </cluster-connections>

            <security-settings>
                <security-setting match="#">
                    <permission type="send" roles="guest"/>
                    <permission type="consume" roles="guest"/>
                    <permission type="createNonDurableQueue" roles="guest"/>
                    <permission type="deleteNonDurableQueue" roles="guest"/>
                </security-setting>
            </security-settings>

            <address-settings>
                <address-setting match="#">
                    <dead-letter-address>jms.queue.DLQ</dead-letter-address>
                    <expiry-address>jms.queue.ExpiryQueue</expiry-address>
                    <redelivery-delay>0</redelivery-delay>
                    <max-size-bytes>10485760</max-size-bytes>
                    <page-size-bytes>2097152</page-size-bytes>
                    <address-full-policy>PAGE</address-full-policy>
                    <message-counter-history-day-limit>10</message-counter-history-day-limit>
                    <redistribution-delay>1000</redistribution-delay>
                </address-setting>
            </address-settings>

            <jms-connection-factories>
                <connection-factory name="InVmConnectionFactory">
                    <connectors>
                        <connector-ref connector-name="in-vm"/>
                    </connectors>
                    <entries>
                        <entry name="java:/ConnectionFactory"/>
                    </entries>
                </connection-factory>
                <connection-factory name="RemoteConnectionFactory">
                    <connectors>
                        <connector-ref connector-name="netty"/>
                    </connectors>
                    <entries>
                        <entry name="java:jboss/exported/jms/RemoteConnectionFactory"/>
                    </entries>
                    <ha>true</ha>
                    <block-on-acknowledge>true</block-on-acknowledge>
                    <retry-interval>1000</retry-interval>
                    <retry-interval-multiplier>1.0</retry-interval-multiplier>
                    <reconnect-attempts>-1</reconnect-attempts>
                </connection-factory>
                <pooled-connection-factory name="hornetq-ra">
                    <transaction mode="xa"/>
                    <connectors>
                        <connector-ref connector-name="in-vm"/>
                    </connectors>
                    <entries>
                        <entry name="java:/JmsXA"/>
                    </entries>
                </pooled-connection-factory>
            </jms-connection-factories>

            <jms-destinations>
                <jms-queue name="ExpiryQueue">
                    <entry name="java:/jms/queue/ExpiryQueue"/>
                </jms-queue>
                <jms-queue name="DLQ">
                    <entry name="java:/jms/queue/DLQ"/>
                </jms-queue>
                <jms-queue name="SiGuardServerQueue">
                    <entry name="java:jboss/exported/queue/siguard/serverQueue"/>
                </jms-queue>
                <jms-topic name="SiGuardClientTopic">
                    <entry name="java:jboss/exported/topic/siguard/clientTopic"/>
                </jms-topic>
                <jms-topic name="SiGuardNodeTopic">
                    <entry name="java:jboss/exported/topic/siguard/nodeTopic"/>
                </jms-topic>
            </jms-destinations>
        </hornetq-server>
    </subsystem>
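Note that the `broadcast-group` and `discovery-group` above both reference the `messaging-group` socket binding, which is defined in the `socket-binding-group` section of the same standalone-full-ha.xml. That binding is where the multicast address actually comes from; in a default EAP 6.4 install it typically looks like this (the exact port may differ in your file):

```xml
<socket-binding name="messaging-group" port="0"
                multicast-address="${jboss.messaging.group.address:231.7.7.7}"
                multicast-port="45700"/>
```

Because this expression falls back to 231.7.7.7 when `jboss.messaging.group.address` is not set, two boxes started with only different `-u` values will still discover each other over the same messaging multicast address.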

Solution

  • By default each HornetQ instance using standalone-full-ha.xml will use UDP multicast to broadcast information about itself and also listen for information about other potential cluster members out on the network. In general, nodes which use the same multicast IP address & port will find each other and form a cluster.

    If you don't want nodes to cluster together at all then you shouldn't configure clustering.

    If you want to have multiple independent clusters of nodes on the same network then each cluster must use unique multicast addresses by specifying the following on the command-line:

    • -u: This controls the jboss.default.multicast.address system property in the server configuration which is used in the socket-binding-group. By default it is 230.0.0.4.
    • -Djboss.messaging.group.address: This controls the multicast address used by the messaging-group socket-binding used for HornetQ clustering. By default it is 231.7.7.7.

    In your case, I'd recommend starting your JBoss instances with something like this:

    standalone.sh -c standalone-full-ha.xml -b ipOfOneMachineInFirstBox -u 230.0.0.4 -Djboss.messaging.group.address=231.7.7.7 -Djboss.node.name=cluster1-node1
    
    standalone.sh -c standalone-full-ha.xml -b ipOfOneMachineInSecondBox -u 230.0.0.5 -Djboss.messaging.group.address=231.7.7.8 -Djboss.node.name=cluster2-node1
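Since both commands set `-Djboss.messaging.group.address` on the command line, no change to standalone-full-ha.xml itself is required. Alternatively, if you prefer not to rely on the flag, you could edit the fallback value in the `messaging-group` socket binding on each box — a sketch for the second box, assuming the default binding layout:

```xml
<!-- Second box: illustrative only. Changing the fallback from the default
     231.7.7.7 to 231.7.7.8 keeps the two clusters separate even when the
     -Djboss.messaging.group.address flag is omitted at startup. -->
<socket-binding name="messaging-group" port="0"
                multicast-address="${jboss.messaging.group.address:231.7.7.8}"
                multicast-port="45700"/>
```

Either way, the key point is that each cluster must end up with a unique messaging multicast address; whether that comes from the system property or the XML fallback is a matter of preference.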