I am using the execution plan below to populate my Hazelcast-backed event table. The problem is: how can I reuse this existing Hazelcast-backed event table from another execution plan?
This is a follow-up to another, similar question.
@Import('users:1.0.0')
define stream users (meta_name string, correlation_id int);
@from(eventtable = 'hazelcast', cluster.name = 'cluster_a', cluster.password = 'pass@cluster_a')
define table UserTable (name string, id int);
from users
select meta_name as name, correlation_id as id
insert overwrite UserTable
on UserTable.id == id;
You can use the same collection.name
in both execution plans. You don't need to specify cluster.name
and cluster.password
. Refer to the following example:
Execution Plan 1
@Plan:name('TestIn')
@Import('dataIn:1.0.0')
define stream dataIn (id int, name string);
@from(eventtable = 'hazelcast', collection.name='hzTable')
define table hzTable (id int, name string);
from dataIn
insert into hzTable;
Execution Plan 2
@Plan:name('TestOut')
@Export('dataOut:1.0.0')
define stream dataOut (id int, name string);
@from(eventtable = 'hazelcast', collection.name='hzTable')
define table hzTable (id int, name string);
define trigger periodicTrigger at every 2 sec;
from periodicTrigger join hzTable
select hzTable.id as id, hzTable.name as name
insert into dataOut;
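If the execution plans need to join an existing, external Hazelcast cluster instead of the default embedded one, you should be able to add the cluster.name and cluster.password properties from your original annotation alongside collection.name (a sketch based on the properties in your question; the cluster credentials here are just your original example values):

@from(eventtable = 'hazelcast', cluster.name = 'cluster_a', cluster.password = 'pass@cluster_a', collection.name = 'hzTable')
define table hzTable (id int, name string);

As long as both execution plans use the same collection.name (and, for an external cluster, the same cluster credentials), they operate on the same underlying Hazelcast map.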