So this is my thread to post my progress and ask questions relevant to the creation of a 3D engine for Star Trek MP in C. For reference, I started off using this tutorial.

I have gotten up to the segment on the camera. Based on the examples and explanations I have encountered thus far, here is my existing gfx_engine.h file, with datatypes and function prototypes.


Code:

#ifndef gfx_engine_h
#define gfx_engine_h

// ENGINE TYPEDEFS
typedef struct {
    long x;
    long y;
    long z;
} rend_point_t;

// a vector has the same layout as a point; it must be declared before
// the point prototypes below, which take rend_vector_t arguments
typedef rend_point_t rend_vector_t;

rend_point_t P_AddVectorToPoint(rend_point_t *point, rend_vector_t *vector);
rend_point_t P_SubVectorFromPoint(rend_point_t *point, rend_vector_t *vector);
rend_vector_t P_SubPointFromPoint(rend_point_t *point1, rend_point_t *point2);
void P_SetPointToPoint(rend_point_t *point, long x, long y, long z);

rend_vector_t V_AddVectorToVector(rend_vector_t *vector1, rend_vector_t *vector2);
rend_vector_t V_SubVectorFromVector(rend_vector_t *vector1, rend_vector_t *vector2);
rend_vector_t V_RotateXY(char angle);
rend_vector_t V_RotateYZ(char angle);
rend_vector_t V_RotateXZ(char angle);
rend_vector_t V_Scale(int s0, int s1, int s2);

typedef struct {
    int minX, maxX;     // drawing window X
    int minY, maxY;     // drawing window Y
    char renderAngle;   // FOV
    int renderdist;     // camera depth
} rend_camera_t;

struct MapData;                 // defined elsewhere; forward-declared so this header compiles on its own
typedef struct MapData MapData_t;

void CAM_InitTo1stP(rend_camera_t *camera, rend_point_t *point);
void CAM_InitTo3rdP(rend_camera_t *camera, rend_point_t *point);
void CAM_CullMap(rend_camera_t *camera, MapData_t *map);

#endif


Next I have to figure out the rotation matrices. On this page, the rotation matrices for XY, XZ, and ZY are shown. I get why the shown formulas are used, but how exactly do you do matrix math in C? Would it involve using a multi-dimensional array? Also, once I set up the formulas for each rotation matrix, how would each matrix correspond to movement on an axis? The tutorial seems to gloss over this (unless I'm blind).
Hi ACagliano,

It depends on what kind of primitive surface you need to use.
The smallest surface is the triangle, with only 3 points.

You have to put the coordinates in an array: 9 values per triangle (x, y, z for each of the 3 points) + 1 for the color, times nb_triangles.

Then you have to determine what is visible (surface priority, etc.) according to a perspective factor which reflects how far away the observer is.

After that you just have to apply a transformation matrix to turn the 3D coordinates into 2D TI-screen coordinates, and draw your surfaces as fast as possible. Don't forget to manage clipping to avoid memory corruption.

The elements of the transformation matrix depend on the viewing angles (x, y, z) and position (x, y, z) of the observer. To calculate sin/cos/tan, it can be faster to use approximation formulas (order 2).
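The "order 2" idea is just a truncated Taylor series near zero. A minimal float sketch (the function names are mine, and a real calculator build would more likely use a fixed-point lookup table):

```c
#include <assert.h>

/* Truncated Taylor series near 0: cos(x) ~ 1 - x^2/2, sin(x) ~ x.
   Illustrative only; on the ez80 you would use scaled integers. */
static float cos_approx(float x) { return 1.0f - (x * x) / 2.0f; }
static float sin_approx(float x) { return x; }  /* the order-2 term of sin is 0 */
```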

What compiler do you use?
My compiler is ZDS. It's the one that comes with the C toolchain by Mateo.
Most of what you've said is above my head at the moment.
From what I understand, thus far, the 3d environment consists of the following:

1. Points (x,y,z).
2. Vectors (dx, dy, dz).
3. Angles (xy, xz, yz) (rotation, pitch, yaw?).

The tutorial I'm using talks about rotation matrices and gives you the matrix setup for what to multiply by, but how is the environment itself (your coords) set up? Like how do you get from your position/angles to a matrix? Then, how do you get from vectors to angles?

Also, what I don't understand is what a triangle has to do with this.
I would take a look at GLM and copy matrix math code from there.
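To the "multi-dimensional array" question: yes, a small fixed-size 2D array is the usual C representation, and a matrix-by-vector multiply is just two nested loops. A rough sketch (names are mine, not GLM's):

```c
#include <assert.h>
#include <stddef.h>

typedef struct { long x, y, z; } vec3_t;

/* handy for testing: the identity matrix leaves a vector unchanged */
static long g_identity[3][3] = { {1, 0, 0}, {0, 1, 0}, {0, 0, 1} };

/* multiply a 3x3 matrix by a column vector: out[row] = sum over col */
static vec3_t mat3_mul_vec3(long m[3][3], vec3_t v)
{
    long in[3]  = { v.x, v.y, v.z };
    long out[3] = { 0, 0, 0 };
    for (size_t row = 0; row < 3; row++)
        for (size_t col = 0; col < 3; col++)
            out[row] += m[row][col] * in[col];
    return (vec3_t){ out[0], out[1], out[2] };
}
```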
ACagliano wrote:
My compiler is ZDS. It's the one that comes with the C toolchain by Mateo.
Most of what you've said is above my head at the moment.
From what I understand, thus far, the 3d environment consists of the following:

1. Points (x,y,z).
2. Vectors (dx, dy, dz).
3. Angles (xy, xz, yz) (rotation, pitch, yaw?).

The tutorial I'm using talks about rotation matrices and gives you the matrix setup for what to multiply by, but how is the environment itself (your coords) set up? Like how do you get from your position/angles to a matrix? Then, how do you get from vectors to angles?

Also, what I don't understand is what a triangle has to do with this.


The 3D space is composed only of points with 3 coordinates.
The rotation matrix is just a mathematical operator that rotates the 3D space.
You also have another transformation that projects all the 3D points onto a 2D plane (the TI screen).
The triangle is the minimum surface for drawing faces, because if you have no surfaces to draw, you only have points to draw. With a triangle you only have to manage 3 points; if you choose a square you have to manage 4 points, so it is slower...
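A minimal sketch of that 3D-to-2D projection, assuming the camera looks down +z and `d` is the perspective factor (the observer distance); the names here are illustrative, not from any toolchain:

```c
#include <assert.h>

typedef struct { long x, y, z; } pt3_t;
typedef struct { long x, y; } screen_pt_t;

/* Perspective divide: dividing by z pulls far points toward the
   screen center. Caller must ensure z != 0. */
static screen_pt_t project_point(pt3_t p, long d,
                                 long center_x, long center_y)
{
    screen_pt_t s;
    s.x = center_x + (p.x * d) / p.z;
    s.y = center_y + (p.y * d) / p.z;
    return s;
}
```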
Dear Friend wrote:
The 3D space is composed only of points with 3 coordinates.

I know this. I have point and vector typedefs in my rendering engine, for positions and difference between positions, respectively.

Dear Friend wrote:
The rotation matrix is just a mathematical operator that rotates the 3D space.

So if I have x, y, and z coordinates, do I have to multiply all 3 rotation matrices by the coordinate set to get the rotated coordinates? Or is the rotation matrix what I multiply the movement by? I'm confused as to how that ties in.

Dear Friend wrote:
You also have another transformation that projects all the 3D points onto a 2D plane (the TI screen).

That much I gathered.

Dear Friend wrote:
The triangle is the minimum surface for drawing faces, because if you have no surfaces to draw, you only have points to draw. With a triangle you only have to manage 3 points; if you choose a square you have to manage 4 points, so it is slower...

But wait, if I'm understanding this right, that's what's used to draw things like walls properly, where a wall, say, goes from point A to point B. But in my case, objects only ever occupy one single point in "space". They then have a size (in which case that point is the origin) and sometimes an irregularity seed, which is used to randomly generate the sprite. I have a spriteset that represents each object. Do I need to deal with triangles still for this? I'm thinking I only really need points.
ACagliano wrote:
Dear Friend wrote:
The rotation matrix is just a mathematical operator that rotates the 3D space.

So if I have x, y, and z coordinates, do I have to multiply all 3 rotation matrices by the coordinate set to get the rotated coordinates? Or is the rotation matrix what I multiply the movement by? I'm confused as to how that ties in.

A 3D "transformation" matrix is actually 4 rows by 4 columns, so the vectors you multiply by the matrix should also have 4 components: x, y, z, and "w", which should equal 1. The transformation matrix can encode rotation, translation, and scaling. When you multiply a matrix by a vector, the transformations encoded in the matrix get applied to the vector. Once again, I would recommend looking at GLM for how to do the math.

That being said, if you want to store the translation outside the matrix, you could use a 3x3 matrix and a normal {x, y, z} vector. It would be faster that way.
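A sketch of that 3x3-plus-translation layout, i.e. a transform is R*p + t (integer math, illustrative names):

```c
#include <assert.h>

typedef struct { long x, y, z; } v3_t;

static long g_rot_identity[3][3] = { {1, 0, 0}, {0, 1, 0}, {0, 0, 1} };

/* Apply rotation matrix r, then add translation vector t. Keeping t
   outside the matrix avoids the 4th row/column of a full 4x4. */
static v3_t transform(long r[3][3], v3_t t, v3_t p)
{
    v3_t out;
    out.x = r[0][0]*p.x + r[0][1]*p.y + r[0][2]*p.z + t.x;
    out.y = r[1][0]*p.x + r[1][1]*p.y + r[1][2]*p.z + t.y;
    out.z = r[2][0]*p.x + r[2][1]*p.y + r[2][2]*p.z + t.z;
    return out;
}
```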
Disclaimer: I am no master at 3d graphics

You multiply your coordinates by the rotation matrix only to rotate them in space around a point (such as the origin), which is why the matrix has sine and cosine in it. All other movement is done by adding/subtracting from the coordinates.

About the triangles: a triangle is always a triangle no matter how much it is rotated on the screen (unlike, say, a square, which becomes a parallelogram when rotated and displayed in 2D space). So you can have one routine for displaying and coloring triangles on the screen. If the sprites you have designed are square, you won't be able to display them properly when they are rotated. That's why 3D models are usually broken down into triangles.

For the 3d to 2d transformation, here's this source I used to render a cube in ICE:
http://anthony.liekens.net/index.php/Computers/RenderingTutorial3DTo2D
That being said, ACag is looking at making a 3D 'point-sprite' engine from what I can tell - this should make things a lot simpler. You won't need to utilise triangles for your project since your graphics are sprites.

I also 2nd c4ooo's notion regarding 3x3 matrix + translation vector.
Tr1p1ea is correct: my existing engine uses a 3D point-sprite approach. Each object occupies one "point" in the map, and the only things my engine (currently) cares about are:

1. The vectors between your current position and the current position of the object to render (those control the X and Y position), and the resulting RELATIVE angles of rotation.
2. The overall distance between your ship and the object, which controls the scale.
3. The id of the object, which controls which sprite we fetch.

For example, my current engine does something like this: {x, y, z, [xy, z]}. It operates in two parts.

/** PRERENDERER **/
All of this is the pre-rendering stage of the engine.
Step 1: Ship {0, 0, 0, [32, 0]}, Enemy Ship {20, 20, 0}
Step 2: Calc vector: ObjVect = {20 - 0, 20 - 0, 0 - 0} => {20, 20, 0}
Step 3: Calc vector magnitude: D = sqrt(20*20 + 20*20 + 0*0) => sqrt(400 + 400 + 0) => sqrt(800) => 28 (int)
Is the distance less than the render distance (50)? Yes. Proceed.
Step 4: Return the XY angle and Z angle from the vector components. At present, I use arctan(y, x) for XY and arctan(z, x) for Z. If both of these angles fall within the FOV (+/- 32 byte degrees, i.e. +/- 45 degrees), proceed.
In the example above, XY is arctan(20, 20), or 45 degrees (32 byte degrees), and Z is arctan(0, 20), or 0 degrees. As both angles are <= 32, the object renders.
Step 5: The Y coordinate is increased in inverse proportion to distance.
Step 6: Angles, object ID, and distance are dumped to a buffer; the number of items to render is saved for use later.


/** RENDERER **/

Step 7: Buffer is sorted by distance.
Step 8: Object ID is used to fetch the proper sprite.
Step 9: XY angle is multiplied by the window width, then divided by the FOV width to get the X coordinate.
Step 10: Z angle is multiplied by the window height, then divided by the FOV height to get the Y coordinate.
Step 11: XY angle is used to apply slight rotations to sprites. Thus, an object traveling towards the outer angles of the viewer will be tilted towards the outside ever so slightly.
Step 12: The ratio of the distance to the render distance is used to inversely determine scale.
Step 13: Sprite is scaled.
Step 14: Sprite is rendered at the X and Y coordinates, adjusted for sprite size.
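Steps 9, 10, and 12 boil down to a couple of integer ratios; a rough sketch, assuming a 320x240 window and the 32-byte-degree FOV half-width from earlier (these helpers are mine, not the engine's):

```c
#include <assert.h>

/* Steps 9/10: angle 0 maps to the screen center, +/- fov to the edges. */
static int angle_to_screen(long angle, int window_size, long fov)
{
    return (int)(window_size / 2 + angle * (window_size / 2) / fov);
}

/* Step 12: closer objects draw larger; 100% at distance 0, 0% at the
   render-distance cutoff. */
static int sprite_scale_pct(long dist, long render_dist)
{
    return (int)(100 - dist * 100 / render_dist);
}
```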

What I'm wondering is, if my engine is this simple point-sprite stuff, how much of the actual true 3D mechanics do I need? Also would it be more advantageous to use full 3D, or would it be better to use this simplified type of engine, since the map will be sparsely populated with objects?
bump.
ACagliano wrote:

What I'm wondering is, if my engine is this simple point-sprite stuff, how much of the actual true 3D mechanics do I need? Also would it be more advantageous to use full 3D, or would it be better to use this simplified type of engine, since the map will be sparsely populated with objects?

I'm no expert, but it seems that a point-sprite system would be easier and faster.
  