As the title asks: are the Newton-Raphson and Newton conjugate gradient methods the same? I had understood them to be the same.
No, they aren't. The Newton conjugate gradient (Newton-CG) method is a modified version of Newton's method (also called Newton-Raphson). Rather than working with the full Hessian as Newton's method does, it uses conjugate gradient iterations to approximately apply the inverse of the local Hessian to the gradient, which only requires Hessian-vector products. A common advantage is that each iteration can be much cheaper, so in practice it can converge faster than Newton's method.*
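To make the difference concrete, here is a minimal sketch (the function names are my own) contrasting the two kinds of step: the classic Newton step solves the linear system H d = -g exactly, whereas the Newton-CG step only approximates its solution with a few conjugate gradient iterations driven by Hessian-vector products. A real implementation would also add a line search and handle negative curvature, which this sketch omits.

```python
import numpy as np

def newton_step(grad, hess):
    """Classic Newton step: solve the full linear system H d = -g."""
    return np.linalg.solve(hess, -grad)

def newton_cg_step(grad, hess_vec, max_iter=20, tol=1e-8):
    """Newton-CG step: approximate the solution of H d = -g with
    conjugate gradient iterations, using only Hessian-vector products."""
    d = np.zeros_like(grad)
    r = -grad - hess_vec(d)          # residual of H d = -g (here just -g)
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Hp = hess_vec(p)             # only a Hessian-vector product is needed
        alpha = rs_old / (p @ Hp)
        d += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:    # stop once the system is solved well enough
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return d
```

Note that `newton_cg_step` never forms or inverts the Hessian matrix; it only evaluates `hess_vec(p)`, which is what makes the method attractive for large problems.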
If you know Python, you can see code explaining how to use the method here in SciPy's documentation. Reading the explanations, formulas, and code there, you will be able to follow the steps of the method and see how it differs from the standard Newton method.
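To give a flavour of the SciPy interface, here is a small example using the built-in Rosenbrock test function; the starting point and option value are just illustrative.

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der, rosen_hess

x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])

# Newton-CG requires the gradient (jac); the Hessian (hess) or a
# Hessian-vector product (hessp) is optional but speeds up convergence.
res = minimize(rosen, x0, method='Newton-CG',
               jac=rosen_der, hess=rosen_hess,
               options={'xtol': 1e-8})

print(res.x)  # should be close to the minimizer [1, 1, 1, 1, 1]
```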
*In certain cases. For example, computing the inverse of the Hessian matrix can become quite expensive when the matrix is large, which is where Newton-CG pays off.