I love the push queries (called continuous queries in Apache Flink) of Apache ksqlDB. https://developer.confluent.io/learn-kafka/ksqldb/push-queries-and-pull-queries/ They let you get notified via HTTP/2 of new query results whenever the result set (or the underlying data) changes. That is awesome.
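For reference, this is what such a push query looks like in ksqlDB (the `orders` stream here is just a hypothetical example):

```sql
-- Hypothetical stream backed by a Kafka topic.
CREATE STREAM orders (id VARCHAR, amount DOUBLE)
  WITH (KAFKA_TOPIC='orders', VALUE_FORMAT='JSON');

-- Push query: EMIT CHANGES keeps the connection open and streams
-- every new matching row to the client as it arrives.
SELECT id, amount FROM orders WHERE amount > 100 EMIT CHANGES;
```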
How could we get Apache Pulsar, or more specifically Pulsar SQL, to serve push queries? Or is there a similar approach for pushing query results to a service endpoint (and from there on to a client via HTTP/2 or WebSockets)?
I don't want to run queries if the data hasn't changed, so polling is not an option.
Pulsar SQL is not a stream processing solution like ksqlDB. To do what you want, you need a stream processing engine with SQL capability. You can look at Flink, Spark, Storm, or Samza, to name a few. Connectors to/from Pulsar are available for most of them. If you want to use ksqlDB itself, it should work fine with protocol handlers that make Pulsar "speak" the Kafka protocol, such as Starlight-for-Kafka or KoP.
Another possibility, if you don't have too much data to process, is to use a consumer to get notified of new messages and run a Pulsar SQL query each time a new message arrives. That could be implemented by writing a Pulsar Sink. It's not very efficient, but it could do the job depending on your use case.
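A minimal sketch of that notify-then-query pattern, with stand-ins so it stays self-contained: `incoming` simulates the Pulsar consumer (in a real deployment you'd use `pulsar.Client(...).subscribe(...)` and block on `consumer.receive()`), and `run_sql_query` is a hypothetical placeholder for calling Pulsar SQL (which in practice goes through Trino):

```python
import queue

# Stand-in for a Pulsar consumer. In a real deployment this would be
#   client = pulsar.Client("pulsar://localhost:6650")
#   consumer = client.subscribe("sensor-events", "my-subscription")
# and consumer.receive() instead of incoming.get().
incoming = queue.Queue()

def run_sql_query(topic):
    # Placeholder for running a Pulsar SQL query against the topic;
    # in practice this would call Trino (e.g. via a Trino client).
    return f'SELECT * FROM pulsar."public/default"."{topic}"'

def consume_and_query(results):
    # Query only when a new message has actually arrived -- no polling.
    while not incoming.empty():
        msg = incoming.get()
        results.append(run_sql_query(msg["topic"]))

results = []
incoming.put({"topic": "sensor-events"})
consume_and_query(results)
```

The key point is that the query runs inside the message callback path, so query frequency is bounded by the message rate rather than a poll interval.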