I am interested in learning about 3D video game development, but I am not sure where to start. Rather than just making a game with one of the various game-maker tools, I am more interested in how it is done under the hood. Ideally, I would like to know in which format 3D models, etc. are stored (coordinate format and so on), and how the 3D data is represented on the screen from a certain perspective, as in free-roaming 3D video games like Devil May Cry. I have seen some links regarding 3D matrices, but I don't really understand how they are used. Any help for beginners would be much appreciated.
Thanks
Video game development is a huge field requiring knowledge in game theory, computer science, math, physics and art. Depending on what you want to specialize in, there are different starting points. But since this is a site for programming questions, here are some insights on the programming part of it:
Assets (models, textures, sounds) are created using dedicated 3rd party tools (think of Gimp, Photoshop, Blender, 3ds Max, etc.), which offer a wide range of export formats. These formats usually have one thing in common: they are designed for easy interchange between applications, not for fast loading at runtime.
Video games have high performance requirements and assets have to be loaded and unloaded all the time during gameplay. So the content has to be in a format that is compact and loads fast. Often 3rd party formats do not meet the specific requirements you have in your game project. For optimal performance you would want to consider developing your own format.
Examples of assets and common 3rd party formats:
In my game project I use an importer that converts my textures from their source image format to DDS files. This is not a format I developed myself, but it is still one of the fastest available for loading with Direct3D (the graphics API).
The Wavefront OBJ file format is a simple, text-based format that is easy to understand, and most 3D modelling applications support it. But because it is text-based, the files are much larger than equivalent binary files, and they require a lot of expensive parsing and processing. So I developed an importer that converts OBJ models to my custom high-performance binary format (a minimal sketch of that idea follows after these examples).
WAV is a very common sound file format, and it is also well suited for direct use in a game, so no custom format is necessary in this case.
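To give a rough idea of what such a converter does, here is a minimal sketch. It only reads the vertex position lines (`v x y z`) of an OBJ file and dumps them as a flat binary blob; the function name and the binary layout (a count followed by raw floats) are just illustrations, not my actual format.

```cpp
#include <cstdint>
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

// Reads the "v x y z" lines of an OBJ file and writes the positions as a
// flat binary blob: a 32-bit vertex count followed by the raw floats.
// (Normals, texture coordinates and faces are ignored to keep it short.)
bool ConvertObjPositionsToBinary(const std::string& objPath, const std::string& binPath)
{
    std::ifstream in(objPath);
    if (!in) return false;

    std::vector<float> positions;
    std::string line;
    while (std::getline(in, line))
    {
        std::istringstream tokens(line);
        std::string tag;
        tokens >> tag;
        if (tag == "v")                       // vertex position line
        {
            float x, y, z;
            tokens >> x >> y >> z;
            positions.push_back(x);
            positions.push_back(y);
            positions.push_back(z);
        }
    }

    std::ofstream out(binPath, std::ios::binary);
    if (!out) return false;

    const uint32_t vertexCount = static_cast<uint32_t>(positions.size() / 3);
    out.write(reinterpret_cast<const char*>(&vertexCount), sizeof(vertexCount));
    out.write(reinterpret_cast<const char*>(positions.data()),
              positions.size() * sizeof(float));
    return true;
}
```

Loading the binary file later is then essentially a single read into a preallocated buffer, which is what makes such custom formats fast.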
Rendering a 3D scene at least 30 times per second at an average screen resolution requires quite a lot of calculations. For this purpose GPUs were built. While it is possible to write any kind of program for the GPU using very low-level languages, most developers use an abstraction like Direct3D or OpenGL. These APIs, while restricting the way of communicating with the GPU, greatly simplify graphics-related tasks.
I have only worked with Direct3D so far, but some of this should apply to OpenGL as well.
As I said, the GPU can be programmed. Direct3D and OpenGL each come with their own GPU programming language, a.k.a. shading language: HLSL (Direct3D) and GLSL (OpenGL). A program written in one of these languages is called a shader.
Before rendering a 3D model the graphics device has to be prepared for rendering. This is done by binding the shaders and other effect states to the device. (All of this is done using the API.)
A 3D model is usually represented as a set of vertices. For example, 4 vertices for a rectangle, 8 for a cube, etc. These vertices consist of multiple components. The absolute minimum in this case would be a position component (3 floating point numbers representing the X, Y and Z offsets in 3D space). A position on its own is just an infinitely small point, so we also need to define how the vertices are connected to form surfaces, usually as triangles described by indices into the vertex list.
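As a rough illustration (the type and the names are my own, not from any particular API): a vertex with only a position component, and a quad described as 4 vertices plus 6 indices that connect them into 2 triangles.

```cpp
#include <cstdint>

// Minimal vertex: just a position (X, Y, Z). Real vertices usually also
// carry normals, texture coordinates, colors, etc.
struct Vertex
{
    float x, y, z;
};

// A unit quad in the XY plane, made of 4 vertices...
const Vertex quadVertices[4] =
{
    { -0.5f, -0.5f, 0.0f },   // bottom left
    { -0.5f,  0.5f, 0.0f },   // top left
    {  0.5f,  0.5f, 0.0f },   // top right
    {  0.5f, -0.5f, 0.0f },   // bottom right
};

// ...and 6 indices that connect them into 2 triangles.
const uint16_t quadIndices[6] =
{
    0, 1, 2,   // first triangle
    0, 2, 3,   // second triangle
};
```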
Once the vertices and triangles are defined, they can be written to the memory of the GPU. If everything is set up correctly, we can issue a draw call through the API. The GPU then executes your shaders and processes all the input data. In the last step the rendered triangles are written to the defined output (the screen, for example).
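With Direct3D 11 the upload-and-draw part looks roughly like the following fragment. It reuses the Vertex/quadVertices/quadIndices definitions from the previous sketch and assumes the device, immediate context, shaders and input layout have already been created and bound, since the full setup is far too long to show here.

```cpp
#include <d3d11.h>

// Assumes: device/context already created, shaders and input layout already
// bound, and Vertex/quadVertices/quadIndices as defined in the sketch above.
void DrawQuad(ID3D11Device* device, ID3D11DeviceContext* context)
{
    // Upload the vertex data to GPU memory.
    D3D11_BUFFER_DESC vbDesc = {};
    vbDesc.ByteWidth = sizeof(quadVertices);
    vbDesc.Usage = D3D11_USAGE_DEFAULT;
    vbDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
    D3D11_SUBRESOURCE_DATA vbData = { quadVertices };
    ID3D11Buffer* vertexBuffer = nullptr;
    device->CreateBuffer(&vbDesc, &vbData, &vertexBuffer);

    // Upload the index data as well.
    D3D11_BUFFER_DESC ibDesc = {};
    ibDesc.ByteWidth = sizeof(quadIndices);
    ibDesc.Usage = D3D11_USAGE_DEFAULT;
    ibDesc.BindFlags = D3D11_BIND_INDEX_BUFFER;
    D3D11_SUBRESOURCE_DATA ibData = { quadIndices };
    ID3D11Buffer* indexBuffer = nullptr;
    device->CreateBuffer(&ibDesc, &ibData, &indexBuffer);

    // Bind the buffers and issue the draw call.
    UINT stride = sizeof(Vertex), offset = 0;
    context->IASetVertexBuffers(0, 1, &vertexBuffer, &stride, &offset);
    context->IASetIndexBuffer(indexBuffer, DXGI_FORMAT_R16_UINT, 0);
    context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    context->DrawIndexed(6, 0, 0);

    vertexBuffer->Release();
    indexBuffer->Release();
}
```

In a real renderer the buffers would of course be created once and reused every frame instead of being recreated per draw.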
As I said before, a 3D mesh consists of vertices with a position in 3D space. These positions are all expressed in a coordinate system called object space.
In order to place the object in the world, or to move, rotate or scale it, these positions have to be transformed; in other words, they have to be expressed in another coordinate system, which in this case is called world space.
The simplest and most efficient way to do this transformation is matrix multiplication: From the translation, rotation and scaling amounts a 4x4 matrix is constructed. This matrix is then multiplied with each and every vertex. (The math behind it is quite interesting, but not in the scope of this question.)
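To make this a bit more concrete, here is a tiny hand-rolled example (column-vector convention, translation stored in the last column) that builds a translation matrix and applies it to a single vertex position; real code would of course use an existing math library.

```cpp
#include <cstdio>

// A 4x4 matrix, row-major storage; points are treated as column vectors (x, y, z, 1).
struct Mat4 { float m[4][4]; };

// Builds a matrix that moves points by (tx, ty, tz).
Mat4 Translation(float tx, float ty, float tz)
{
    Mat4 t = {{
        { 1, 0, 0, tx },
        { 0, 1, 0, ty },
        { 0, 0, 1, tz },
        { 0, 0, 0, 1  },
    }};
    return t;
}

// Transforms the point (x, y, z, 1) by the matrix.
void Transform(const Mat4& mat, float& x, float& y, float& z)
{
    const float nx = mat.m[0][0] * x + mat.m[0][1] * y + mat.m[0][2] * z + mat.m[0][3];
    const float ny = mat.m[1][0] * x + mat.m[1][1] * y + mat.m[1][2] * z + mat.m[1][3];
    const float nz = mat.m[2][0] * x + mat.m[2][1] * y + mat.m[2][2] * z + mat.m[2][3];
    x = nx; y = ny; z = nz;
}

int main()
{
    float x = 1.0f, y = 0.0f, z = 0.0f;        // a vertex in object space
    const Mat4 world = Translation(5.0f, 0.0f, 0.0f);
    Transform(world, x, y, z);                 // now in world space
    std::printf("%.1f %.1f %.1f\n", x, y, z);  // prints 6.0 0.0 0.0
    return 0;
}
```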
Besides object and world space there are also view space (the coordinate system of the 'camera'), clip space, screen space and tangent space (on the surface of an object). Vectors have to be transformed between these coordinate systems quite a lot, so you can see why matrices are so important in 3D graphics.
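With Direct3D you would typically use the DirectXMath helpers for these transformations. Here is a rough sketch of the usual chain from object space to clip space; the function name and the concrete camera and projection values are made up for illustration.

```cpp
#include <DirectXMath.h>
using namespace DirectX;

// Takes a position in object space and returns it in homogeneous clip space,
// going through world space and view space on the way.
XMVECTOR ObjectToClip(FXMVECTOR objectPosition)
{
    // Object space -> world space: place the object at (10, 0, 5).
    const XMMATRIX world = XMMatrixTranslation(10.0f, 0.0f, 5.0f);

    // World space -> view space: a camera at (0, 5, -10) looking at the origin.
    const XMMATRIX view = XMMatrixLookAtLH(
        XMVectorSet(0.0f, 5.0f, -10.0f, 1.0f),   // eye position
        XMVectorSet(0.0f, 0.0f, 0.0f, 1.0f),     // focus point
        XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f));    // up direction

    // View space -> clip space: a perspective projection.
    const XMMATRIX projection = XMMatrixPerspectiveFovLH(
        XM_PIDIV4,      // 45 degree vertical field of view
        16.0f / 9.0f,   // aspect ratio
        0.1f, 100.0f);  // near and far planes

    // Combine the three transformations and apply them to the point
    // (no perspective divide yet, so the result is still in clip space).
    return XMVector3Transform(objectPosition, world * view * projection);
}
```

In practice the world, view and projection matrices are usually computed once per object and frame and handed to the vertex shader, which then performs this multiplication for every vertex.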
Find a topic that you think is interesting and start googling. I think I gave you quite a few keywords and I hope I gave you some idea of the topics you mentioned specifically.
There is also a Game Development site on the Stack Exchange network, which might be better suited for this kind of question. The top voted questions are always a good read on any SE site.