Tags: activemq-classic, producer-consumer, stress-testing

Behavior of ActiveMQ with N producers and 1 consumer


In my architecture, many producers send messages to an ActiveMQ queue, and a consumer consumes those messages from that queue in real time. Even though the messages are produced very quickly, the queue seems to be able to handle them, and no messages are lost.

My purpose here is to stress-test this architecture, but I cannot find documentation that explains what kinds of problems might occur in this scenario. For example, could messages be lost? If so, when? Could the delivery of messages produced by one producer P1 be held back by a huge volume of messages from another producer P2?

I'm sending persistent JMS messages using this Maven dependency:

<dependency>
   <groupId>org.apache.activemq</groupId>
   <artifactId>activemq-all</artifactId>
   <version>5.15.15</version>
</dependency>

Here's my producer code:

// producer

import org.apache.activemq.ActiveMQConnection;
import org.apache.activemq.ActiveMQConnectionFactory;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;


//Producer constructor: session and producer are instance fields used later by stream()
ConnectionFactory connectionFactory = new ActiveMQConnectionFactory(url);
Connection connection = connectionFactory.createConnection();
connection.start();
session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Destination destination = session.createQueue(jmsQueue);
producer = session.createProducer(destination);
...

//On each incoming message do this
try {
    stream(message);
} catch (JMSException e) {
    System.out.println("ERROR: " + e);
}


private void stream(LogRecord message) throws JMSException {
    TextMessage toSend = session.createTextMessage(message.getMessage());
    producer.send(toSend);
}
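
Note that JMS producers send messages persistently by default, so the code above should already be producing persistent messages; still, it can be worth making that explicit. A minimal sketch, assuming the same producer field as in the snippet above (DeliveryMode is part of the javax.jms API):

import javax.jms.DeliveryMode;

// Persistent delivery is the JMS default, but stating it explicitly makes the
// intent obvious: the broker has to write each message to its store before
// acknowledging the send.
producer.setDeliveryMode(DeliveryMode.PERSISTENT);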

Solution

  • If you're sending persistent messages then no message loss should happen short of a disk failure of some kind on the broker.

    The broker can only sustain its speed if consumption keeps pace with production. Once messages start accumulating on the broker, they either fill up the heap and increase garbage-collection pressure or have to be paged out of memory to disk. In either case, performance will suffer. Keep in mind that ActiveMQ brokers are designed to be a conduit through which messages flow; they are not a storage platform like a database. They can buffer messages for a time, but if messages keep accumulating, a tipping point will eventually come when performance degrades. A minimal sketch of a consumer that keeps draining the queue is shown below, after this answer.

    For what it's worth, if you're looking for the best performance from an ActiveMQ broker, I would recommend taking a look at ActiveMQ Artemis - the next-generation broker from ActiveMQ.
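
    As a rough illustration of the consumer side of the "keep pace" advice, here is a minimal sketch of a consumer that drains the queue via a MessageListener; the url and jmsQueue values are assumed to match the producer snippet in the question:

    import org.apache.activemq.ActiveMQConnectionFactory;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Destination;
    import javax.jms.JMSException;
    import javax.jms.MessageConsumer;
    import javax.jms.Session;
    import javax.jms.TextMessage;

    // consumer: receives messages as fast as the broker delivers them
    ConnectionFactory connectionFactory = new ActiveMQConnectionFactory(url);
    Connection connection = connectionFactory.createConnection();
    connection.start();
    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
    Destination destination = session.createQueue(jmsQueue);
    MessageConsumer consumer = session.createConsumer(destination);
    consumer.setMessageListener(message -> {
        try {
            // keep this handler fast; a slow consumer is what lets messages pile up on the broker
            System.out.println(((TextMessage) message).getText());
        } catch (JMSException e) {
            System.out.println("ERROR: " + e);
        }
    });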