In my use case, I need to compute the mean of a set of values that follow a non-standard distribution. Initially, I receive n data samples from which the non-standard probability distribution can be estimated. Then I receive m data samples for which I need to compute the mean, weighted by that non-standard distribution. All samples are complex numbers.
To create this non-standard distribution, I divide the IQ plane (the data being complex numbers) into square blocks and use the fraction of the n samples falling into each block as the probability of that block.
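For concreteness, here is a minimal sketch of what I am doing in Python/NumPy. The data, the block size `d`, and the final weighting step are illustrative placeholders, not my actual setup:

```python
import numpy as np

# Placeholder data: n reference samples used to build the empirical
# distribution, and m samples whose weighted mean is required.
rng = np.random.default_rng(0)
ref = rng.normal(size=1000) + 1j * rng.normal(size=1000)  # n samples
obs = rng.normal(size=200) + 1j * rng.normal(size=200)    # m samples

d = 0.25  # side length of the square blocks in the IQ plane (the parameter in question)

# Square blocks: identical bin edges on the I (real) and Q (imaginary) axes.
lo = min(ref.real.min(), ref.imag.min())
hi = max(ref.real.max(), ref.imag.max())
edges = np.arange(lo, hi + d, d)

# Empirical probability of each block, estimated from the n reference samples.
H, _, _ = np.histogram2d(ref.real, ref.imag, bins=[edges, edges])
P = H / H.sum()

# Look up the block of each of the m samples and use its probability as a weight.
i = np.clip(np.digitize(obs.real, edges) - 1, 0, len(edges) - 2)
q = np.clip(np.digitize(obs.imag, edges) - 1, 0, len(edges) - 2)
w = P[i, q]

# Probability-weighted mean of the m samples.
mean = np.sum(w * obs) / np.sum(w)
```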
Is there a mechanism for identifying an appropriate size for these blocks? There appears to be a strong dependence between the size of the blocks and the accuracy of the estimated probability distribution.
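By way of illustration, one mechanism I could imagine is applying a standard 1-D bin-width rule, such as the Freedman-Diaconis rule exposed by NumPy's `histogram_bin_edges`, to each axis separately and taking the smaller width. The sketch below is only an assumption; I don't know whether such rules carry over to this 2-D setting:

```python
import numpy as np

rng = np.random.default_rng(0)
ref = rng.normal(size=1000) + 1j * rng.normal(size=1000)  # placeholder reference samples

# Apply the Freedman-Diaconis rule to each axis and take the smaller
# width, so the square blocks resolve the finer of the two marginals.
fd_real = np.diff(np.histogram_bin_edges(ref.real, bins='fd'))[0]
fd_imag = np.diff(np.histogram_bin_edges(ref.imag, bins='fd'))[0]
d = min(fd_real, fd_imag)
```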
Also, what is the proper terminology for this method? At the moment it is based mostly on intuition rather than proper probability theory, and I would like to read more about constructing such non-standard distributions.
Simple random sampling (SRS) or systematic sampling. Cf. cluster sampling.