Can anyone tell me what's the difference between SAD and SSD? I'm talking about Sum of Absolute Differences and Sum of Squared Differences.
Is it just that in SSD you square each difference? Instead of raising to the second power, couldn't I just take the absolute value of each difference?
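For reference, here is a minimal sketch of both metrics, assuming two equally shaped NumPy arrays (e.g. image patches); the function and variable names are my own, not from the thread:

```python
import numpy as np

def sad(a, b):
    """Sum of Absolute Differences: sum of |a_i - b_i|."""
    return np.abs(a.astype(np.int64) - b.astype(np.int64)).sum()

def ssd(a, b):
    """Sum of Squared Differences: sum of (a_i - b_i)^2."""
    d = a.astype(np.int64) - b.astype(np.int64)
    return (d * d).sum()

patch_a = np.array([[10, 20], [30, 40]])
patch_b = np.array([[12, 18], [35, 25]])
print(sad(patch_a, patch_b))  # |-2| + |2| + |-5| + |15| = 24
print(ssd(patch_a, patch_b))  # 4 + 4 + 25 + 225 = 258
```

The cast to int64 is just to avoid overflow/wraparound when the inputs are unsigned 8-bit image data.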
What is the squaring useful for?
Squaring is often used to penalize big differences more strongly. If you have a fairly big error (difference) and you square it, the result becomes even bigger relative to the small errors. So an optimization method based on squared values will "try" to get rid of the biggest differences (outliers) first.
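A quick numeric illustration of that point (the error values here are made up for the example): two residual vectors with the same SAD can have very different SSD, because a single large residual dominates the squared sum.

```python
import numpy as np

# Same SAD (20) for both, but the outlier blows up the SSD.
small_errors = np.array([5, 5, 5, 5])    # SAD = 20, SSD = 100
one_outlier  = np.array([1, 1, 1, 17])   # SAD = 20, SSD = 292

for e in (small_errors, one_outlier):
    print(f"SAD = {np.abs(e).sum():3d}   SSD = {(e**2).sum():3d}")
```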
It is also known that squared-error methods are better when the noise (disturbance) follows a Gaussian distribution, while absolute-error methods are better when it follows a Laplacian distribution.
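The reason, sketched via the standard maximum-likelihood argument (my addition, not spelled out in the answer above): maximizing the likelihood of i.i.d. residuals $e_i$ is equivalent to minimizing the negative log-likelihood, and the noise model decides which metric that is.

```latex
% Gaussian noise: p(e_i) \propto \exp(-e_i^2 / 2\sigma^2)
\max_\theta \prod_i e^{-e_i^2/2\sigma^2}
  \;\Longleftrightarrow\; \min_\theta \sum_i e_i^2 \quad \text{(SSD)}
% Laplacian noise: p(e_i) \propto \exp(-|e_i| / b)
\max_\theta \prod_i e^{-|e_i|/b}
  \;\Longleftrightarrow\; \min_\theta \sum_i |e_i| \quad \text{(SAD)}
```

So SSD is the maximum-likelihood choice under Gaussian noise, and SAD under Laplacian noise.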