Tags: database, stored-procedures, mule, esb, servicebus

Mule ESB data flow design


I need advice on how best to implement the flow in Mule Studio. I need to build a web service based on the data returned by three database stored procedures. That is, I need to run three procedures in sequence to obtain the necessary data, where each subsequent procedure uses the results of the previous one as input parameters. So... which is better:

  1. On the database side, write a wrapper procedure that runs the three procedures and ultimately produces the desired data set for the web service.
  2. Call the three procedures from the Studio and build the desired data set there (I have no idea how to store a previous procedure's result somewhere in Mule so that the next procedure can use it)?
  3. Which option (1 or 2) will work faster?

Solution

    1. Whether to combine all three procedures into one on the database side or to orchestrate them from Mule is your decision.

    2. The result of each procedure call automatically becomes the payload of your Mule flow, so that data is available to the next procedure. If you want to keep the result of each procedure, you can use an enricher, which stores the result in a flow variable, session variable or message property of your choice (see the first sketch after this list): http://www.mulesoft.org/documentation/display/current/Message+Enricher

    If you want to run the procedures simultaneously and aggregate the three responses, you can look at Scatter-Gather (see the second sketch below): http://www.mulesoft.org/documentation/display/current/Scatter-Gather You can also search for other fork/join approaches using aggregators.

    3. Option 1 could potentially be faster, since you open fewer connections to the database, but it all depends on your procedures and data volumes. On the other hand, you may benefit from having the orchestration in Mule. I would try both out and see which fits your needs.
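
    Here is a minimal sketch of option 2 using enrichers, assuming a Mule 3.x Database connector configuration named dbConfig and hypothetical procedure, parameter and column names (proc1, proc2, proc3, inputId, ...). Adapt the queries and the MEL expressions to whatever your procedures actually return (OUT parameters vs. result sets):

        <flow name="threeProceduresFlow">
            <!-- Run proc1 and keep its result in a flow variable; the enricher
                 leaves the current payload untouched -->
            <enricher target="#[flowVars.proc1Result]" doc:name="Store proc1 result">
                <db:stored-procedure config-ref="dbConfig" doc:name="Call proc1">
                    <db:parameterized-query><![CDATA[{ call proc1(:inputId) }]]></db:parameterized-query>
                    <db:in-param name="inputId" value="#[payload.id]"/>
                </db:stored-procedure>
            </enricher>

            <!-- Run proc2, feeding it a value taken from proc1's stored result;
                 how you extract that value depends on how proc1 exposes its output -->
            <enricher target="#[flowVars.proc2Result]" doc:name="Store proc2 result">
                <db:stored-procedure config-ref="dbConfig" doc:name="Call proc2">
                    <db:parameterized-query><![CDATA[{ call proc2(:keyFromProc1) }]]></db:parameterized-query>
                    <db:in-param name="keyFromProc1" value="#[flowVars.proc1Result.someOutput]"/>
                </db:stored-procedure>
            </enricher>

            <!-- Run proc3 last; its result becomes the flow payload, i.e. the data
                 your web service returns -->
            <db:stored-procedure config-ref="dbConfig" doc:name="Call proc3">
                <db:parameterized-query><![CDATA[{ call proc3(:keyFromProc2) }]]></db:parameterized-query>
                <db:in-param name="keyFromProc2" value="#[flowVars.proc2Result.someOutput]"/>
            </db:stored-procedure>
        </flow>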
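
    And a sketch of the Scatter-Gather alternative, which only applies if the three calls do not depend on each other (again, dbConfig and the procedure names are placeholders):

        <scatter-gather doc:name="Run three independent procedures in parallel">
            <db:stored-procedure config-ref="dbConfig">
                <db:parameterized-query><![CDATA[{ call proc1() }]]></db:parameterized-query>
            </db:stored-procedure>
            <db:stored-procedure config-ref="dbConfig">
                <db:parameterized-query><![CDATA[{ call proc2() }]]></db:parameterized-query>
            </db:stored-procedure>
            <db:stored-procedure config-ref="dbConfig">
                <db:parameterized-query><![CDATA[{ call proc3() }]]></db:parameterized-query>
            </db:stored-procedure>
        </scatter-gather>
        <!-- After the scatter-gather the payload is a collection holding the three
             route results, which you can then combine with a transformer of your choice -->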