As you may know, I am working on a 3D graphing utility for the Casio Prizm graphing calculator. I have it working nicely, short of a few features. Notably, I've been using a camera rotation system whereby pressing one set of arrows adds/subtracts to/from the absolute theta_y, and pressing the other set adds/subtracts to/from the absolute theta_z. For correct rotation, the change in [theta_x, theta_y, theta_z] needs to depend on the current [theta_x, theta_y, theta_z], and I am having a hard time deriving the correct math for it. First, a bit of background:

:: I am defining the rotation for the graph with three angles, theta_x, theta_y, theta_z (or XYZ), applied in the order X, Y, Z. The transform I'm using is the camera transform. It's a left-handed transform with, as I said, an angle order of X, Y, then Z. My code to calculate the camera position looks like this (remember, I want to keep it pointing at the center of the graph, which is usually the three-dimensional origin (0,0,0)):

Code:
   costx = mcosf(theta_x); sintx = msinf(theta_x);      // 1 0
   costy = mcosf(theta_y); sinty = msinf(theta_y);      // 1 0
   costz = mcosf(theta_z); sintz = msinf(theta_z);      //-1 0

   float alpha = 0.f, beta = 0.f, gamma = cam_radius;

   float alpha_prime =  alpha*costy*costz                     + beta*(costy*sintz)                   + gamma*(sinty);
   float beta_prime  = -alpha*(costx*sintz+costz*sintx*sinty) + beta*(costx*costz-sintx*sinty*sintz) + gamma*costy*sintx;
   float gamma_prime =  alpha*(sintx*sintz-costx*costz*sinty) - beta*(costz*sintx+costx*sinty*sintz) + gamma*costx*costy;

   cx = -alpha_prime;
   cy = -beta_prime;
   cz = -gamma_prime;

   // adjust for grid center (ex,ey)
   cx += ex;
   cy += ey;


Next, the math that I use to apply the camera transform to 3D points to be drawn to account for the camera position and heading. (dx, dy, dz) are the remapped coordinates in 3-space:

Code:
   float dx =  (val_x-cx)*costy*costz + (val_y-cy)*(costy*sintz) + (val_z-cz)*(sinty);
   float dy = -(val_x-cx)*(costx*sintz+costz*sintx*sinty) + (val_y-cy)*(costx*costz-sintx*sinty*sintz) + (val_z-cz)*costy*sintx;
   float dz =  (val_x-cx)*(sintx*sintz-costx*costz*sinty) - (val_y-cy)*(costz*sintx+costx*sinty*sintz) + (val_z-cz)*costx*costy;


However, adjusting the camera rotation is proving to be a real problem. I've considered several different approaches:

  1. Simply add/subtract pi/8 to/from theta_y/theta_z. This does rotation, but not intuitively.
  2. Use the camera transform to transform delta theta_y or delta theta_z values into delta_x/y/z angles, and add those to theta_x/y/z. This appears not to work.
  3. Use the camera cx/cy/cz coordinates, and apply a camera transform of +/- pi/8 in either theta_y or theta_z. Then, use the resulting new coordinates for the camera to work backwards and get the XYZ angles for the vector that joins the origin to that point.

The code for the last approach is below. It sadly does not work. Any assistance would be greatly appreciated. matan2f is a custom atan2() implementation.

Code:
   //alpha, beta, gamma are the delta rotation angles
   float c1 = mcosf(alpha), c2 = mcosf(beta), c3 = mcosf(gamma);
   float s1 = msinf(alpha), s2 = msinf(beta), s3 = msinf(gamma);

   //[cxo cyo czo] are the camera coords before inversion/re-centering
   float alpha_prime =  cxo*c2*c3            + cyo*(c2*s3)          + czo*(s2);
   float beta_prime  = -cxo*(c1*s3+c3*s1*s2) + cyo*(c1*c3-s1*s2*s3) + czo*c2*s1;
   float gamma_prime =  cxo*(s1*s3-c1*c3*s2) - cyo*(c3*s1+c1*s2*s3) + czo*c1*c2;

   theta_x = matan2f( beta_prime, gamma_prime );
   if (gamma_prime >= 0) {
      theta_y = matan2f( alpha_prime * mcosf(theta_x), gamma_prime );
   } else {
      theta_y = matan2f( alpha_prime * mcosf(theta_x), -gamma_prime );
   }
   theta_z = matan2f( mcosf(theta_x), msinf(theta_x) * msinf(theta_y) );

I should mention that I found the angle-extraction algorithm at the end of that code here:
http://stackoverflow.com/questions/1251828/calculate-rotations-to-look-at-a-3d-point. Again, thanks in advance for any help, large or small.
... Matrix math, anyone?

(still trying to read your post Razz )
AHelper wrote:
... Matrix math, anyone?

(still trying to read your post Razz )
Yes, it does indeed apply a good deal of matrix math. The Wikipedia page I linked shows the three rotation matrices that make up the camera transform that I'm using.
Did I point you at ZPR already and did you already explain why it wouldn't work? If not, you should look here: http://www.nigels.com/glt/gltzpr/
elfprince13 wrote:
Did I point you at ZPR already and did you already explain why it wouldn't work? If not, you should look here: http://www.nigels.com/glt/gltzpr/
I'm reading through that source now, and it looks to me as if it rotates through an angle in a given direction. I suppose that actually is what I want, but the tricky thing is that it appears to perform rotation/scaling of all point/polygons/whatevers right there. With my system, I maintain the originals, and rotate them for the current camera position and heading each time I re-draw the frame (which incidentally is when the camera moves).
I mentioned to Kerm on IRC that using Euler angles to represent combined rotations can be a little unintuitive and tricky to work with - quaternions tend to be favoured for representing rotations.
KermMartian wrote:
I'm reading through that source now, and it looks to me as if it rotates through an angle in a given direction. I suppose that actually is what I want, but the tricky thing is that it appears to perform rotation/scaling of all point/polygons/whatevers right there. With my system, I maintain the originals, and rotate them for the current camera position and heading each time I re-draw the frame (which incidentally is when the camera moves).


I'm pretty sure it doesn't rescale the input data, since I rewrote the core of the library in Python for use with PyOpenGL, and I don't remember it doing that.
benryves wrote:
I mentioned to Kerm on IRC that using Euler angles to represent combined rotations can be a little unintuitive and tricky to work with - quaternions tend to be favoured for representing rotations.
Since benryves is widely considered to be extremely awesome, and I'm frustrated with Euler angles, I'm going to try playing with quaternions. First, I wrote myself a tiny 3-vector / 4-quaternion library that does things like multiply quaternions, normalize vectors and quaternions, perform quaternion rotations given an axis vector and a rotation angle, and a few other miscellaneous things. I implemented my quaternion math based on this tutorial:

http://www.gamedev.net/page/resources/_/technical/math-and-physics/a-simple-quaternion-based-camera-r1997

Next, before I start worrying about rotating my camera, I need to do simple scene rendering with a camera transform. I have defined my camera as follows:

Code:
   struct camstruct {
      struct vector position;
      struct vector view;
      struct vector up;
   } camera;
where struct vector is basically just three floats, x, y, and z. It seems to me that gluLookAt() pretty much does the camera transform that I want, so I read this page:

http://pyopengl.sourceforge.net/documentation/manual/gluLookAt.3G.html

I turned the math that they describe into this code:

Code:
                  //first make them pseudo-perspective via a camera transform
                  struct vector F, f, upprime, s, u;
                  F.x = camera.view.x - camera.position.x;
                  F.y = camera.view.y - camera.position.y;
                  F.z = camera.view.z - camera.position.z;
                  vec_copy(&f,F);
                  vec_norm(&f);
                  
                  vec_copy(&upprime, camera.up);
                  vec_norm(&upprime);
                  
                  vec_cross(&s,f,upprime);
                  vec_norm(&s);   // the gluLookAt page computes u from s/||s||
                  vec_cross(&u,s,f);
                  
                  float M[4][4] = {{ s.x,  s.y,  s.z, 0},
                                   { u.x,  u.y,  u.z, 0},
                                   {-f.x, -f.y, -f.z, 0},
                                   {   0,    0,    0, 1}};
The linked gluLookAt page says:
Quote:
and gluLookAt is equivalent to glMultMatrixf(M); glTranslated (-eyex, -eyey, -eyez);
The translation is fine, and what I'd expect, but the glMultMatrixf is confusing me. M is 4x4, but all of my points to be transformed are 3x1 vectors. Help, and thanks in advance.

Edit: I missed that glMultMatrixf actually does C = C*M, where C is some internal "current [transform] matrix" for OpenGL, which makes sense, but in my case it doesn't make a difference: this is the only transform, so it's just C=I*M=M anyway. I'm still trying to work out whether the point vectors v[] are quaternions, whether [x y z 0] is a reasonable translation, or whether there's something else I need to do.
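
A later thought, in case it helps: for points the usual extension is [x y z 1], not [x y z 0] — the fourth (w) component is what picks up a translation column, and w=0 is conventionally reserved for directions. Since my M has no translation and a bottom row of [0 0 0 1], multiplying [x y z 1] just applies the 3x3 part. A sketch of the idea (mat4_apply_point is a made-up name, not an OpenGL call, and I'm using row-major layout here, unlike OpenGL's column-major convention):

```c
/* Apply a row-major 4x4 transform to a 3D point treated as [x y z 1].
   (mat4_apply_point is a hypothetical name, not OpenGL API.) */
static void mat4_apply_point(const float m[4][4], const float p[3], float out[3]) {
    float v[4] = { p[0], p[1], p[2], 1.f };   /* w = 1 for points */
    float r[4];
    for (int i = 0; i < 4; i++)
        r[i] = m[i][0]*v[0] + m[i][1]*v[1] + m[i][2]*v[2] + m[i][3]*v[3];
    /* with a bottom row of [0 0 0 1], r[3] stays 1, so no perspective divide */
    out[0] = r[0]; out[1] = r[1]; out[2] = r[2];
}
```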
  