Given a point cloud with x, y, z coordinates in some arbitrary range (e.g. x: [-40, 40], y: [-1, 1], z: [-100, 100]), what is the most efficient way to transform the coordinates so that they fall within OpenGL's clip volume (x: [-1, 1], y: [-1, 1], z: [-1, 1]) and can therefore be displayed?
Since you're saying "normalize for display", I assume you don't know the exact ranges upfront.
You'll need one pass over the cloud to find the min/max values for each axis. From those, build a transformation matrix that translates the cloud so its center sits at the origin, then scales it uniformly so its longest axis fits inside the clip volume.
You don't need to transform the points on the CPU side; the vertex shader can do that with the matrix you pass in as a uniform.
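The vertex shader ends up as simple as this (the uniform name `uNormalize` is just an example):

```glsl
#version 330 core
layout(location = 0) in vec3 position;
uniform mat4 uNormalize; // matrix computed once on the CPU

void main() {
    gl_Position = uNormalize * vec4(position, 1.0);
}
```

Upload the matrix once with `glUniformMatrix4fv` (with `transpose` set to `GL_FALSE` if you built it column-major) and every point is normalized on the GPU with no per-point CPU work.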