Tags: spring-boot, ibm-mq, spring-jms, jmstemplate

MQRC_UNKNOWN_ALIAS_BASE_Q when connecting with IBM MQ cluster using CCDT and Spring Boot JMSTemplate


I have a Spring Boot app using JMSListener + IBMConnectionFactory + CCDT for connecting to an IBM MQ cluster.

I set the following connection properties:

- URL pointing to a generated CCDT file
- username (password not required, since it is a test environment)
- queue manager name NOT defined - since it's the cluster's task to decide, and a few Google results, including several Stack Overflow answers, indicate that in my case the qmgr must be set to an empty string.
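
For reference, a minimal sketch of roughly how this kind of setup can be expressed with com.ibm.mq.jms.MQConnectionFactory; the CCDT path and values below are placeholders, not taken from my actual configuration:

import java.net.URL;

import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class CcdtConnectionFactoryConfig {

    public MQConnectionFactory mqConnectionFactory() throws Exception {
        MQConnectionFactory cf = new MQConnectionFactory();
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);             // client (TCP) transport
        cf.setCCDTURL(new URL("file:///opt/app/ccdt/AMQCLCHL.TAB")); // placeholder path to the generated CCDT
        cf.setQueueManager("");                                      // empty string: let the CCDT entry decide
        return cf;
    }
}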

When my Spring Boot JMSListener tries to connect to the queue, the following MQRC_UNKNOWN_ALIAS_BASE_Q error occurs:

2019-01-29 11:05:00.329 WARN  [thread:DefaultMessageListenerContainer-44][class:org.springframework.jms.listener.DefaultMessageListenerContainer:892] - Setup of JMS message listener invoker failed for destination 'MY.Q.ALIAS' - trying to recover. Cause: JMSWMQ2008: Failed to open MQ queue 'MY.Q.ALIAS'.; nested exception is com.ibm.mq.MQException: JMSCMQ0001: IBM MQ call failed with compcode '2' ('MQCC_FAILED') reason '2082' ('MQRC_UNKNOWN_ALIAS_BASE_Q').
com.ibm.msg.client.jms.DetailedInvalidDestinationException: JMSWMQ2008: Failed to open MQ queue 'MY.Q.ALIAS'.
        at com.ibm.msg.client.wmq.common.internal.Reason.reasonToException(Reason.java:513)
        at com.ibm.msg.client.wmq.common.internal.Reason.createException(Reason.java:215)

In the MQ error log I see the following:

01/29/2019 03:08:05 PM - Process(27185.478) User(mqm) Program(amqrmppa)
                    Host(myhost) Installation(Installation1)
                    VRMF(9.0.0.5) QMgr(MyQMGR)

AMQ9999: Channel 'MyCHL' to host 'MyIP' ended abnormally.

EXPLANATION:
The channel program running under process ID 27185 for channel 'MyCHL'
ended abnormally. The host name is 'MyIP'; in some cases the host name
cannot be determined and so is shown as '????'.
ACTION:
Look at previous error messages for the channel program in the error logs to
determine the cause of the failure. Note that this message can be excluded
completely or suppressed by tuning the "ExcludeMessage" or "SuppressMessage"
attributes under the "QMErrorLog" stanza in qm.ini. Further information can be
found in the System Administration Guide.
----- amqrmrsa.c : 938 --------------------------------------------------------
01/29/2019 03:15:14 PM - Process(27185.498) User(mqm) Program(amqrmppa)
                    Host(myhost) Installation(Installation1)
                    VRMF(9.0.0.5) QMgr(MyQMGR)

AMQ9209: Connection to host 'MyIP' for channel 'MyCHL' closed.

EXPLANATION:
An error occurred receiving data from 'MyIP' over TCP/IP.  The connection
to the remote host has unexpectedly terminated.

The channel name is 'MyCHL'; in some cases it cannot be determined and so
is shown as '????'.
ACTION:
Tell the systems administrator.

Since the MQ error log contains QMgr(MyQMGR), a value I did not set in the connection properties, I assume the routing is working: the MQ cluster figured out which queue manager to use.

The alias exists and points to an existing queue. Both the target queue and the alias are added to the cluster via the CLUSTER(clustname) attribute.

What could be wrong?


Solution

  • Short Answer

    • MQ Clustering is not used for a consumer application to find a queue to GET messages from.
    • MQ Clustering is used when a producer application PUTs messages to direct them to a destination.

    Further reading

    Clustering is used when messages are being sent to help provide load balancing to multiple instances of a clustered queue. In some cases people use this for hot/cold failover by having two instances of a queue and keeping only one PUT(ENABLED).

    If an application is a producer that is putting messages to a clustered queue, it only needs to be connected to a queue manager in the cluster and have permission to put to that clustered queue. MQ will decide where to send that message based on a number of factors.
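
    As an illustration (a sketch with placeholder names, not part of the original setup), a producer using Spring's JmsTemplate only needs a connection factory that points at any queue manager in the cluster:

    import org.springframework.jms.core.JmsTemplate;

    public class ClusteredQueueProducer {

        private final JmsTemplate jmsTemplate; // built from a connection factory for any queue manager in the cluster

        public ClusteredQueueProducer(JmsTemplate jmsTemplate) {
            this.jmsTemplate = jmsTemplate;
        }

        public void send(String payload) {
            // 'MY.CLUSTERED.QUEUE' is a placeholder; it could equally be a local QALIAS
            // that targets the clustered queue
            jmsTemplate.convertAndSend("MY.CLUSTERED.QUEUE", payload);
        }
    }

    MQ clustering then decides which instance of the clustered queue receives each message.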


    Prior to v7.1 there were only two ways to provide access to remote clustered queues:

    1. Using a QALIAS:

      • Define a local QALIAS which has a TARGET set to the clustered queue name

        Note this QALIAS does not itself need to be clustered.

      • Grant permission to put to the local QALIAS.
    2. Provide permissions to PUT to the SYSTEM.CLUSTER.TRANSMIT.QUEUE.

    The first option allows for granting granular access to an application for specific clustered queues in the cluster. The second option allows for the application to put to any clustered queue in the cluster or any queue on any clustered queue manager in the cluster.

    At v7.1 IBM added a new optional behavior, provided via the setting ClusterQueueAccessControl=RQMName in the Security stanza of qm.ini. If this is enabled (it is not the default), you can grant an application permission to PUT to remote clustered queues directly, without the need for a local QALIAS.


    What clustering is not for is consuming applications, such as the JMSListener in your example.

    An application that will consume from any QLOCAL (clustered or not) must be connected to the queue manager where the QLOCAL is defined.

    If you have a situation where there are multiple instances of a clustered QLOCAL that are PUT(ENABLED), you need to ensure you have consumers connected directly to each queue manager that hosts an instance.


    Based on your comment you have a CCDT with an entry such as:

    CHANNEL('MyCHL') CHLTYPE(CLNTCONN) QMNAME('MyQMGR') CONNAME('node1url(port1),node2url(port2)')
    

    If there are two different queue managers with different queue manager names listening on node1url(port1) and node2url(port2), then you have a few different ways to handle the connection from the app side.

    When you specify the QMNAME to connect to, the app will expect that name to match the name of the queue manager it actually connects to, unless one of the following applies (see the sketch after this list):

    1. If you specify *MyQMGR, MQ will find the channel (or channels) with QMNAME('MyQMGR'), pick one, connect, and not enforce that the remote queue manager name matches.
    2. If in your CCDT you have QMNAME('') (it is set to NULL), then in your app you can specify an empty queue manager name, or a single space, and MQ will find this entry in the CCDT and not enforce that the remote queue manager name matches.
    3. If in your app you specify the queue manager name as *, MQ will use any channel in the CCDT and not enforce that the remote queue manager name matches.
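
    Sketched on the application side, assuming com.ibm.mq.jms.MQConnectionFactory and the placeholder names used in this answer, the three options would look roughly like this:

    import com.ibm.mq.jms.MQConnectionFactory;

    public class QueueManagerNameOptions {

        public static void apply(MQConnectionFactory cf) throws Exception {
            // Option 1: use a CCDT entry with QMNAME('MyQMGR') but do not enforce
            // that the remote queue manager is actually named MyQMGR
            cf.setQueueManager("*MyQMGR");

            // Option 2: the CCDT entry has QMNAME(''); pass an empty (or blank) name
            // cf.setQueueManager("");

            // Option 3: use any channel in the CCDT, with no name check
            // cf.setQueueManager("*");
        }
    }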

    One limitation of a CCDT is that the channel name must be unique within it. Even if the QMNAME is different, you can't have a second entry with the same channel name.

    When you connect, you are hitting the entry with two CONNAMEs and getting connected to the first IP(port). You would only reach the second IP(port) if the first is not available at connect time, in which case MQ tries the second; or if you are already connected with RECONNECT enabled and the first goes down, in which case MQ tries the first and then the second.
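
    If you do want RECONNECT, it can be requested on the connection factory; a sketch, assuming the client reconnect options on MQConnectionFactory (the timeout value is illustrative):

    import com.ibm.mq.jms.MQConnectionFactory;
    import com.ibm.msg.client.wmq.WMQConstants;

    public class ReconnectSettings {

        public static void enableReconnect(MQConnectionFactory cf) throws Exception {
            // Ask the client to reconnect automatically if the connection breaks
            cf.setClientReconnectOptions(WMQConstants.WMQ_CLIENT_RECONNECT);
            // Stop retrying after 1800 seconds of reconnect attempts
            cf.setClientReconnectTimeout(1800);
        }
    }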

    If you want both instances of the clustered queue PUT(ENABLED) to receive traffic, then you want to be able to connect specifically to each of the two queue managers to read those queues.

    I would suggest you add a new channel on each queue manager, with a queue-manager-specific name that is also different from the existing channel name, something like this:

    CHANNEL('MyCHL1') CHLTYPE(CLNTCONN) QMNAME('MyQMGR1') CONNAME('node1url(port1)')
    CHANNEL('MyCHL2') CHLTYPE(CLNTCONN) QMNAME('MyQMGR2') CONNAME('node2url(port2)')
    

    This would be in addition to the existing entry.

    For your putting components you can continue to use the channel that can connect to either queue manager.

    For your getting components, you can configure at least two of them, one connecting to each queue manager using the new queue-manager-specific CCDT entries; this way both queue instances are being consumed.
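
    One possible shape for that, sketched with Spring's DefaultJmsListenerContainerFactory and the hypothetical MyQMGR1/MyQMGR2 entries above (the CCDT path and bean names are placeholders):

    import java.net.URL;

    import javax.jms.ConnectionFactory;

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.jms.annotation.EnableJms;
    import org.springframework.jms.config.DefaultJmsListenerContainerFactory;

    import com.ibm.mq.jms.MQConnectionFactory;
    import com.ibm.msg.client.wmq.WMQConstants;

    @Configuration
    @EnableJms
    public class DualConsumerConfig {

        private ConnectionFactory ccdtConnectionFactory(String qmName) throws Exception {
            MQConnectionFactory cf = new MQConnectionFactory();
            cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
            cf.setCCDTURL(new URL("file:///opt/app/ccdt/AMQCLCHL.TAB")); // placeholder path
            cf.setQueueManager(qmName);                                  // "MyQMGR1" or "MyQMGR2"
            return cf;
        }

        @Bean
        public DefaultJmsListenerContainerFactory qmgr1ContainerFactory() throws Exception {
            DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
            factory.setConnectionFactory(ccdtConnectionFactory("MyQMGR1"));
            return factory;
        }

        @Bean
        public DefaultJmsListenerContainerFactory qmgr2ContainerFactory() throws Exception {
            DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
            factory.setConnectionFactory(ccdtConnectionFactory("MyQMGR2"));
            return factory;
        }
    }

    Each @JmsListener for 'MY.Q.ALIAS' would then point at one of these container factories via its containerFactory attribute, so every instance of the clustered QLOCAL has a consumer attached.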