As I understand it, games usually maintain an internal state for collision detection, commonly known as "hit boxes", which approximate but differ slightly from the 3D geometry rendered on screen. If my understanding is correct, even if one 3D object's geometry visually intersects another's, the game engine might not "register" the collision, and the two objects will pass through each other without colliding. It is also my understanding that the ray tracing cores on a GPU perform (hardware-accelerated) intersection tests between light rays and 3D object geometry in order to compute color values (reflections, light bouncing, etc.); these computations look suspiciously like collision detection between a light ray and object geometry.
My question is: ignoring computational speed or feasibility, would it be possible for a game engine to rely solely on the RT cores of a GPU for collision detection, without maintaining a separate data structure (hit boxes and collision boundaries)?
The end result would be GPU hardware-accelerated collision detection based on the actual geometry rendered on screen, including tessellation data and any other geometry added by the GPU through shaders or other methods.
P.S.: I am not a game developer, so please correct me if my assumptions are wrong.
I'm not sure this question is really suited for Stack Overflow. It's very broad.
But as a summary answer: it does seem possible to use RT cores for other kinds of hit detection, though there are a few things to consider:
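To make the question's intuition concrete: the fundamental operation an RT core accelerates is a ray-triangle intersection test, and a collision query can be phrased in exactly those terms, e.g. by casting a ray along an object's velocity vector and checking whether the nearest hit is closer than the distance travelled this frame. Below is a minimal CPU sketch of the classic Möller-Trumbore ray-triangle test (the kind of math RT hardware runs against a BVH); it is purely illustrative and not any engine's actual API:

```python
# Möller-Trumbore ray/triangle intersection -- the same kind of test that
# RT cores accelerate in hardware against an acceleration structure (BVH).
# Illustrative sketch only; not a real engine or GPU API.

def _sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def _dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Return the hit distance t along `direction`, or None on a miss."""
    edge1 = _sub(v1, v0)
    edge2 = _sub(v2, v0)
    h = _cross(direction, edge2)
    a = _dot(edge1, h)
    if abs(a) < eps:          # ray is parallel to the triangle plane
        return None
    f = 1.0 / a
    s = _sub(origin, v0)
    u = f * _dot(s, h)        # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = _cross(s, edge1)
    v = f * _dot(direction, q)  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * _dot(edge2, q)    # distance along the ray to the hit point
    return t if t > eps else None

# A collision-style query: cast a ray from the object's position along its
# (normalized) velocity; a hit closer than the distance moved this frame
# means the object would pass through the triangle.
hit = ray_triangle((0.25, 0.25, 1.0), (0.0, 0.0, -1.0),
                   (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
# hit == 1.0: the triangle lies one unit ahead along the ray
```

In practice, APIs such as DirectX Raytracing or Vulkan ray queries let shaders issue exactly this kind of traced query against the scene's acceleration structure, which is why repurposing RT cores for hit detection is plausible in principle.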