Tags: unity-game-engine, 3d, shader, textures, paint

Draw onto an object's texture based on a raycast hit position in world space


I am trying to get the world position of a pixel inside a fragment shader.

Let me explain. I followed a tutorial for a fragment shader that lets me paint on objects. Right now it works through texture coordinates, but I want it to work through the pixel's world position. So when I click on a 3D model, I want to compare the Vector3 position of the click to the pixel's Vector3 position, and if the distance is small enough, lerp the color.

This is the setup I have. I created a new 3D project just for making the shader, with the intent to export it later into my main project. In the scene I have the default main camera, a directional light, an object with a script that shows me the FPS, and a default 3D cube with a mesh collider. I created a new material and a new Standard Surface Shader and added them to the cube. After that I attached the following C# script to the cube, with a reference to the shader and the camera.

Update: The problem right now is that the blit doesn't work as expected. If you change the shader as Kalle suggested, remove the blit from the C# script, and assign the Draw shader directly to the 3D model's material, it works as expected, but without any lighting. For my purposes I had to change distance(_Mouse.xyz, i.worldPos.xyz); to distance(_Mouse.xz, i.worldPos.xz); so it paints all the way through to the other side. For debugging I created a RenderTexture, and every frame I use Blit to update it and see what is going on, as in the sketch below. The render texture does not match the positions where the object gets colored. The 3D model I have has a lot of geometry, and since the paint goes through to the other side, it should show up all over the render texture... but right now it is just one line from the top to the bottom of the texture. Also, when I try to paint on the bottom half of the object, the render texture doesn't show anything; only when I paint on the top half can I see red lines (the default painting color).
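
Roughly like this (a minimal sketch of what I mean; the debugRT field stands in for the RenderTexture I created, assigned in the Inspector):

public RenderTexture debugRT;

// Debug helper: mirror the splat map into the debug RenderTexture every
// frame so its contents can be watched while painting.
void LateUpdate()
{
    Graphics.Blit(splatMap, debugRT);
}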

If you want, you can download the sample project here.

This is the code I am using.

Draw.shader

Shader "Unlit/Draw"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
        _Coordinate("Coordinate",Vector)=(0,0,0,0)
        _Color("Paint Color",Color)=(1,1,1,1)
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }
        LOD 100

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float2 uv : TEXCOORD0;
                float4 vertex : SV_POSITION;
            };

            sampler2D _MainTex;
            float4 _MainTex_ST;
            fixed4 _Coordinate, _Color;

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = TRANSFORM_TEX(v.uv, _MainTex);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // sample the texture
                fixed4 col = tex2D(_MainTex, i.uv);
                float draw = pow(saturate(1 - distance(i.uv, _Coordinate.xy)), 100);
                fixed4 drawcol = _Color * draw;
                return saturate(col + drawcol);
            }
            ENDCG
        }
    }
}

Draw.cs

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class Draw : MonoBehaviour
{
    public Camera cam;
    public Shader paintShader;

    RenderTexture splatMap;
    Material snowMaterial, drawMaterial;

    RaycastHit hit;

    private void Awake()
    {
        Application.targetFrameRate = 200;
    }
    void Start()
    {
        drawMaterial = new Material(paintShader);
        drawMaterial.SetVector("_Color", Color.red);

        snowMaterial = GetComponent<MeshRenderer>().material;
        splatMap = new RenderTexture(1024, 1024, 0, RenderTextureFormat.ARGBFloat);
        snowMaterial.mainTexture = splatMap;
    }
    void Update()
    {
        if (Input.GetMouseButton(0))
        {
            if (Physics.Raycast(cam.ScreenPointToRay(Input.mousePosition), out hit))
            {
                drawMaterial.SetVector("_Coordinate", new Vector4(hit.textureCoord.x, hit.textureCoord.y, 0, 0));
                RenderTexture temp = RenderTexture.GetTemporary(splatMap.width, splatMap.height, 0, RenderTextureFormat.ARGBFloat);
                Graphics.Blit(splatMap, temp);
                Graphics.Blit(temp, splatMap, drawMaterial);
                RenderTexture.ReleaseTemporary(temp);
            }
        }
    }
}

As for what I have tried to solve the problem: I searched Google and found this thread about the problem, and tried to implement it in my project. I have also found some projects that have the feature I need, like Mesh Texture Painting. That one works exactly how I need it, but it doesn't work on iOS: the 3D object turns black. You can check out a previous post I made about it; I also talked with the creator on Twitter, but he can't help me. I have also tried this asset, which works OK, but in my main project it runs at a very low frame rate, it's hard for me to customize for my needs, and it doesn't paint on the edges of my 3D model.

The shader above works well and is simple enough that I can change it to get the desired effect.

Thank you!


Solution

  • There are two approaches to this problem - either you pass in the texture coordinate and try to convert it to world space inside the shader, or you pass in a world position and compare it to the fragment's world position. The latter is no doubt the easier of the two.

    So, let's say that you pass in the world position into the shader like so:

    drawMaterial.SetVector("_Coordinate", new Vector4(hit.point.x, hit.point.y, hit.point.z, 0));
    

    Calculating a world position per fragment is expensive, so we do it inside the vertex shader and let the hardware interpolate the value per fragment. Let's add a world position to our v2f struct:

    struct v2f
    {
        float2 uv : TEXCOORD0;
        float4 vertex : SV_POSITION;
        float3 worldPos : TEXCOORD1;
    };
    

    To calculate the world position inside the vertex shader, we can use the built-in matrix unity_ObjectToWorld:

    v2f vert (appdata v)
    {
        v2f o;
        o.vertex = UnityObjectToClipPos(v.vertex);
        o.uv = TRANSFORM_TEX(v.uv, _MainTex);
        o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
        return o;
    }
    

    Finally, we can access the value in the fragment shader like so:

    float draw = pow(saturate(1 - distance(i.worldPos, _Coordinate.xyz)), 100);
    

    EDIT: I just realized - when you do a blit pass, you are not rendering with your mesh; you are rendering a quad which covers the whole screen. Because of this, when you calculate the distance to the vertex, you get the distance to the screen corners, which is not right. There is a way to solve this, though - you can change the render target to your render texture and draw the mesh using a shader which projects the mesh's UVs across the screen.

    It's a bit hard to explain, but basically, the way vertex shaders work is that you take in a vertex in local object space and transform it to be relative to the screen, in the range -1 to 1 on both axes, where 0 is in the center. This is called Normalized Device Coordinate space, or NDC space. We can leverage this: instead of using the model and camera matrices to transform our vertices, we use the UV coordinates, converted from [0,1] space to [-1,1]. At the same time, we can calculate our position and pass it on to the fragment separately. The code below works in local space, so the v2f struct now needs a float3 localPos : TEXCOORD1 field in place of worldPos. Here is how the vertex shader would look:

    v2f vert (appdata v)
    {
        v2f o;

        float2 uv = v.uv;

        // On some platforms, rendering into a render texture flips the
        // image vertically, so compensate:
        // https://docs.unity3d.com/Manual/SL-PlatformDifferences.html
        if (_ProjectionParams.x < 0) {
            uv.y = 1 - uv.y;
        }

        // Convert from [0,1] to [-1,1], for the blit
        o.vertex = float4(2 * (uv - 0.5), 0, 1);

        // We still need UVs to draw the base texture
        o.uv = TRANSFORM_TEX(v.uv, _MainTex);

        // Let's do the calculations in local space instead!
        o.localPos = v.vertex.xyz;

        return o;
    }
    

    Also remember to pass in the _Coordinate variable in local space, using transform.InverseTransformPoint.
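
    For example (a minimal sketch; hit is the RaycastHit from Draw.cs):

    // Convert the world-space hit point into the mesh's local space
    Vector3 localHit = transform.InverseTransformPoint(hit.point);
    drawMaterial.SetVector("_Coordinate", new Vector4(localHit.x, localHit.y, localHit.z, 0));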

    Now, we need to use a different approach to render this into the render texture. Basically, we need to render the actual mesh as if we were rendering from a camera, except that the mesh will be drawn as a splayed-out UV sheet across the screen. First, we set the active render texture to the texture we want to draw into:

    // Cache the old target so that we can reset it later
    RenderTexture previousRT = RenderTexture.active;
    RenderTexture.active = temp;
    

    (You can read about how render targets work here.) Next, we need to bind our material and draw the mesh.

    Material mat = drawMaterial;
    Mesh mesh = yourAwesomeMesh;
    mat.SetTexture("_MainTex", splatMap);
    mat.SetPass(0); // This tells the renderer to use pass 0 from this material
    Graphics.DrawMeshNow(mesh, Vector3.zero, Quaternion.identity);
    

    Finally, blit the texture back to the original:

    // Remember to reset the render target
    RenderTexture.active = previousRT;
    Graphics.Blit(temp, splatMap);
    

    I haven't tested or verified this, but I have used a similar technique to draw a mesh into UVs before. You can read more about DrawMeshNow here.
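
    Putting the pieces together, the drawing step might look roughly like this (an untested sketch of the approach described above; it assumes the Draw script sits on the painted object and that drawMaterial uses the modified shader):

    if (Physics.Raycast(cam.ScreenPointToRay(Input.mousePosition), out hit))
    {
        // The shader compares local positions, so pass the hit point in local space
        Vector3 localHit = transform.InverseTransformPoint(hit.point);
        drawMaterial.SetVector("_Coordinate", new Vector4(localHit.x, localHit.y, localHit.z, 0));
        drawMaterial.SetTexture("_MainTex", splatMap);

        RenderTexture temp = RenderTexture.GetTemporary(splatMap.width, splatMap.height, 0, RenderTextureFormat.ARGBFloat);

        // Draw the mesh, splayed out over its UVs, into the temporary texture
        RenderTexture previousRT = RenderTexture.active;
        RenderTexture.active = temp;
        GL.Clear(false, true, Color.clear); // avoid undefined contents outside the UV islands
        drawMaterial.SetPass(0);
        Graphics.DrawMeshNow(GetComponent<MeshFilter>().sharedMesh, Vector3.zero, Quaternion.identity);
        RenderTexture.active = previousRT;

        // Copy the result back into the splat map
        Graphics.Blit(temp, splatMap);
        RenderTexture.ReleaseTemporary(temp);
    }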