How, ultimately, does Papervision3D (the popular 3D rendering package for Flash) draw transformed textures onto the screen?
Is it internally using any of Flash's rendering apparatus - e.g. by drawing textures into DisplayObjects and transforming them, or with 3D MovieClips? Or perhaps filters? Or is it ultimately just reading pixels out of the textures and painting them into the output, as you would on any software platform?
I ask because the straightforward answer would be the latter one, but after a little testing it appears that getPixel and setPixel simply aren't fast enough for this kind of approach, so there must be something more arcane going on.
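For reference, the kind of naive per-pixel loop I mean looks roughly like this (illustrative only, with the tiling by modulo just to keep the example self-contained):

    import flash.display.BitmapData;

    // Illustrative only: one getPixel/setPixel pair per output pixel.
    function naiveCopy(src:BitmapData, dst:BitmapData):void {
        for (var y:int = 0; y < dst.height; y++) {
            for (var x:int = 0; x < dst.width; x++) {
                // Per-pixel read/write in ActionScript is the bottleneck.
                dst.setPixel(x, y, src.getPixel(x % src.width, y % src.height));
            }
        }
    }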
Thanks for any info!
Edit - my summary of the answer: Papervision does not do perspective transforms per se. It only applies scale and skew (affine) transforms to each textured triangle, and the illusion of perspective arises if you use enough triangles. Those affine transformations go through Flash's own rendering apparatus, which is how costly per-pixel operations are avoided.
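For anyone curious about the per-triangle math, here is a sketch (the function name and layout are mine, not Papervision's) of deriving the affine Matrix that maps a triangle's texture coordinates (u, v) onto its projected screen coordinates. Flash's Matrix maps (x, y) to (a*x + c*y + tx, b*x + d*y + ty), so it's just solving a small linear system:

    import flash.geom.Matrix;

    // (u0,v0)..(u2,v2): the triangle's texture coordinates in texture pixels.
    // (x0,y0)..(x2,y2): the triangle's projected screen coordinates.
    function triangleMatrix(u0:Number, v0:Number, u1:Number, v1:Number,
                            u2:Number, v2:Number,
                            x0:Number, y0:Number, x1:Number, y1:Number,
                            x2:Number, y2:Number):Matrix {
        var du1:Number = u1 - u0, dv1:Number = v1 - v0;
        var du2:Number = u2 - u0, dv2:Number = v2 - v0;
        var det:Number = du1 * dv2 - du2 * dv1; // zero for degenerate triangles
        var a:Number = ((x1 - x0) * dv2 - (x2 - x0) * dv1) / det;
        var c:Number = (du1 * (x2 - x0) - du2 * (x1 - x0)) / det;
        var b:Number = ((y1 - y0) * dv2 - (y2 - y0) * dv1) / det;
        var d:Number = (du1 * (y2 - y0) - du2 * (y1 - y0)) / det;
        return new Matrix(a, b, c, d,
                          x0 - a * u0 - c * v0,
                          y0 - b * u0 - d * v0);
    }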
It's all rendered using the drawing API (lineTo). With the drawing API you can set a bitmapFill rather than a colour, which allows you to draw textures. Internally, Papervision converts every asset you pass in as a texture into a BitmapData object to use when rendering. Then, for every subdivision (triangle) that your 3D object has, it performs a transformation on it in order to get the right perspective.
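As a minimal sketch of that idea (illustrative, not Papervision's actual code), drawing one textured triangle looks roughly like this, assuming texture is a BitmapData and matrix is the affine Matrix mapping texture space onto the screen triangle (e.g. the one computed above):

    import flash.display.BitmapData;
    import flash.display.Graphics;
    import flash.display.Sprite;
    import flash.geom.Matrix;

    function drawTexturedTriangle(canvas:Sprite, texture:BitmapData,
                                  matrix:Matrix,
                                  x0:Number, y0:Number, x1:Number, y1:Number,
                                  x2:Number, y2:Number):void {
        var g:Graphics = canvas.graphics;
        // A bitmapFill instead of a colour fill: Flash's native renderer
        // does the per-pixel work, so no getPixel/setPixel in ActionScript.
        g.beginBitmapFill(texture, matrix, false, true);
        g.moveTo(x0, y0);
        g.lineTo(x1, y1);
        g.lineTo(x2, y2);
        g.lineTo(x0, y0);
        g.endFill();
    }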
This approach is still very processor-intensive, but much faster than setting each pixel. It works very similarly to other software 3D renderers, using the same techniques and theories, just built specifically for ActionScript.