I was reading some forum topics and came across a post with an interesting comment. It suggested never using pixels in any calculations in your game development. For example, if you were to write code to make your character jump, setting the character's maximum jump height to X pixels would be poor practice. I'm not 100% sure why this is bad practice, though, or how to do it the proper way. Is it better to set the max height, for example, to a fraction of the screen's height? Or a fraction of the height of the virtual space? Or some kind of algorithm?
I'm asking because I started developing a 2D game with a 1080 x 1920 virtual space (in pixels), and might consider making another version of the game just to support another aspect ratio.
Because "Pixels" is something that potentially varies between every monitor and/or device your game might be running on!
Because of this, the job of the rendering engine is to construct a scene in virtual space (some way of measuring how near/far individual objects are from each other), and then translate that into actual pixels.
Since you've tagged this "Unity3d", let me speak to that--
Everything in a Unity scene has a position in "units". These have no inherent relation to any of your assets (such as sprites); they are simply how Unity measures distances. When you add a sprite to a scene, Unity creates a game object with a SpriteRenderer component, which (essentially) draws a texture imported with a particular set of parameters. One of those import parameters, "Pixels Per Unit", does exactly what it sounds like: it determines how many of the sprite's pixels correspond to one unit, and therefore how many units the sprite spans. If you import one sprite at 50 Pixels Per Unit and another at 100 Pixels Per Unit (with the same texture dimensions), the first will appear twice as large as the second, since Unity scales it accordingly.
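You can see this relationship directly in code. Here's a minimal sketch (the class name SpriteSizeLogger is just illustrative) that reads a sprite's size from its SpriteRenderer, which Unity reports in world units rather than pixels:

    using UnityEngine;

    // Logs how large a sprite is in world units versus in texture pixels.
    // Attach to any GameObject that has a SpriteRenderer.
    public class SpriteSizeLogger : MonoBehaviour
    {
        void Start()
        {
            var sr = GetComponent<SpriteRenderer>();
            Sprite sprite = sr.sprite;

            // World size = texture size (in pixels) / Pixels Per Unit.
            Vector2 worldSize = sprite.bounds.size;   // measured in units
            float ppu = sprite.pixelsPerUnit;         // the import setting
            Debug.Log($"Sprite is {worldSize.x} x {worldSize.y} units " +
                      $"({worldSize.x * ppu} x {worldSize.y * ppu} source pixels at {ppu} PPU)");
        }
    }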
If you want to call your virtual space units "pixels", that's fine. The idea is that you define jumping in terms of virtual space units, rather than in terms of system-dependent values like the current screen resolution.
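As a concrete sketch (assuming a Rigidbody2D-based character; the class name and maxJumpHeight field are illustrative, not anything from your project), jumping to a maximum height expressed in world units could look like this:

    using UnityEngine;

    // Jump whose peak height is specified in world units, not pixels,
    // so it behaves the same at any screen resolution.
    public class Jumper : MonoBehaviour
    {
        public float maxJumpHeight = 2.5f; // in Unity units
        Rigidbody2D body;

        void Awake()
        {
            body = GetComponent<Rigidbody2D>();
        }

        void Update()
        {
            if (Input.GetButtonDown("Jump"))
            {
                // v = sqrt(2 * g * h) is the launch speed needed to reach
                // height h under gravity g, everything in world units.
                float g = Mathf.Abs(Physics2D.gravity.y * body.gravityScale);
                float launchSpeed = Mathf.Sqrt(2f * g * maxJumpHeight);
                body.velocity = new Vector2(body.velocity.x, launchSpeed);
            }
        }
    }

Because the physics runs entirely in units, this behaves identically whether the game is rendered at 1080 x 1920 or any other resolution/aspect ratio; only the camera decides how those units map to pixels on screen.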