okey dokey.

Added two methods to Unknown. This is probably unsafe (information-destroying, not just unable to solve) for nonlinear or multivariable equations, but it should fit the purpose at hand okay.


Code:

   def collect_additive_terms(self,name):
      if self.reciprocal:
         raise ValueError("Can't collect additive terms of inverted Unknown: %s" % repr(self))
      else:
         usterm = type(self.sterm) == Unknown
         sub_terms = self.sterm.collect_additive_terms(name) if usterm else (0, 0)
         # returns (coefficient of `name`, constant remainder)
         return ((self.mterm if name == self.name else 0) + sub_terms[0],
                 (0 if usterm else self.sterm) + sub_terms[1])
      
   # not sure if this is safe with other variables...
   def collapse_on(self, name):
      m,s = self.collect_additive_terms(name)
      return Unknown(name=name,mterm=m,sterm=s)
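For anyone following along without Unknown.py handy, here's a stripped-down sketch of the idea (this `Lin` class is illustrative only, not the actual Unknown class): walk the right-associated chain m*x + (rest) and accumulate the coefficient of one named variable plus the constant remainder.

```python
# Minimal sketch of the term-collection idea. `Lin` is a hypothetical
# stand-in for Unknown: a linear expression mterm*name + sterm, where
# sterm is either a number or another nested Lin.
class Lin(object):
    def __init__(self, name, mterm=1.0, sterm=0.0):
        self.name, self.mterm, self.sterm = name, mterm, sterm

    def collect_additive_terms(self, name):
        # Recurse into a nested sub-term; a plain number contributes (0, value).
        if isinstance(self.sterm, Lin):
            sub = self.sterm.collect_additive_terms(name)
        else:
            sub = (0.0, self.sterm)
        coeff = self.mterm if self.name == name else 0.0
        # NOTE: like the pre-patch version above, a foreign variable's
        # coefficient is silently dropped here -- that's the
        # "information destroying" caveat for multivariable expressions.
        return (coeff + sub[0], sub[1])

# 3*z + (2*z + 5) collects to coefficient 5, constant 5:
print(Lin('z', 3.0, Lin('z', 2.0, 5.0)).collect_additive_terms('z'))  # (5.0, 5.0)
```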
   


then


Code:
lam12 = inv(T) * (Pplanar[:2]-planar[:2,2])
lambda1 = lam12[0,0]
lambda2 = lam12[1,0]
lambda3 = 1 - lambda1 - lambda2
rval = [lambda1,lambda2,lambda3,x,y,lambda1*trimatrix[2,0]+lambda2*trimatrix[2,1]+lambda3*trimatrix[2,2]]

rval = array([lambda1,lambda2,lambda3,x,y,z])
rval[3+c] = lambda1*trimatrix[2,0]+lambda2*trimatrix[2,1]+lambda3*trimatrix[2,2]

# feel free to put better guesses if you have them
# but it's linear so I think it should be okay?
solved = (rval[3+c] - P[2,0]).collapse_on('z')
real_z = -solved.sterm / solved.mterm
if real_z < epsilon:
    unsolved_indices = array([1,2,3,3+c])
    rval[unsolved_indices] = rval[unsolved_indices](z=real_z)
else:
    print "Found no solution to equation %s=0" % (str(rval[3+c] - P[2,0]))
    return [0., 0., 0., x, y, z]

return rval
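To make the solve step concrete, here's a tiny standalone check (illustrative numbers in the shape of the collapsed m*z + b expressions, not actual model data): once everything collapses to m*z + b, the root is z = -b/m, and substituting back should leave a residual of ~0.

```python
# Illustrative coefficients only, not from the model.
m, b = -1.0, 104.72967660999984
real_z = -b / m                 # solve m*z + b = 0
residual = m * real_z + b       # should be ~0 after substituting back
print(real_z, abs(residual) < 1e-9)
```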
Sadly, I'm hoping to avoid scipy v0.11.x if possible. :) I hope that doesn't make things too challenging, as I really appreciate your algorithmic help on this. I tried doing some tricks with flipping around axis choices when I detect a bad choice, but that didn't help.

Edit: I will try this new method and report back.

Edit #2: Here are my first results with epsilon == 1e-4:


Code:
Found no solution to equation ((92.6517381562*z)+((358.561845721*z)+((7.57092809613e-15*z)+((-450.213583877*z)+((-92.4463991738*z)+((-357.767184703*z)+((-7.57092809613e-15*z)+((450.213583877*z)+((-1*z)+-3.63797880709e-12)))))))))=0
Found no solution to equation ((92.6517381562*z)+((358.561845721*z)+((7.57092809613e-15*z)+((-450.213583877*z)+((-92.4463991738*z)+((-357.767184703*z)+((-7.57092809613e-15*z)+((450.213583877*z)+((-1*z)+-3.63797880709e-12)))))))))=0
Traceback (most recent call last):
  File "./kmz2mc.py", line 501, in <module>
    main()
  File "./kmz2mc.py", line 467, in main
    t2v.geom_tri2voxel(tri)
  File "./kmz2mc.py", line 220, in geom_tri2voxel
    x1, y1, z1, filled = self.geom_bary(tv, x0, y0, z0)
  File "./kmz2mc.py", line 352, in geom_bary
    [l1, l2, l3, x, y, z] = self.geom_cart2bary(tri, x, y, z)
  File "./kmz2mc.py", line 306, in geom_cart2bary
    rval[unsolved_indices] = rval[unsolved_indices](z=real_z)
TypeError: only integer arrays with one element can be converted to an index
Look at the new code. It's scipy-free.

[edit]
patch to make collect_additive_terms "safe", even if it still can't solve everything.


Code:
   def collect_additive_terms(self,name):
      if self.reciprocal:
         raise ValueError("Can't collect additive terms of inverted Unknown: %s" % repr(self))
      else:
         usterm = type(self.sterm) == Unknown
         sub_terms = self.sterm.collect_additive_terms(name) if usterm else (0, 0)
         # other variables' terms are preserved in the remainder instead of dropped
         return ((self.mterm if name == self.name else 0) + sub_terms[0],
                 (0 if usterm else self.sterm) + sub_terms[1]
                 + (Unknown(name=self.name, mterm=self.mterm) if name != self.name else 0))


New behavior (good):

Code:
>>> mc2.collapse_on('y')
((0*Unknown('y'))+((-1*Unknown('z'))+((-2.8149282041245853*Unknown('z'))+((0*Unknown('z'))+((-1.0546593270245028e-15*Unknown('z'))+((0.0*Unknown('z'))+((2.8149282041245853*Unknown('z'))+((0.0*Unknown('z'))+((1.0436731990803598e-15*Unknown('z'))+((0*Unknown('z'))+104.72967660999984))))))))))
>>> mc2.collapse_on('y').collapse_on('z')
((-1.0*Unknown('z'))+((0*Unknown('y'))+104.72967660999984))


Old behavior (bad):

Code:
>>> print mc2.collapse_on('y')
((0*Unknown('y'))+104.72967661)


But it doesn't matter if you only have 1 variable :)


[edit2]
to work around the broadcasting problem:

change

Code:
rval[unsolved_indices] = rval[unsolved_indices](z=real_z)


to

Code:
rval[unsolved_indices] = numpy.vectorize(lambda v:v(z=real_z))(rval[unsolved_indices])
Sadness; the problem remains. :(


Code:
Found no solution to equation ((92.6517381562*z)+((358.561845721*z)+((7.57092809613e-15*z)+((-450.213583877*z)+((-92.4463991738*z)+((-357.767184703*z)+((-7.57092809613e-15*z)+((450.213583877*z)+((-1*z)+-3.63797880709e-12)))))))))=0
Found no solution to equation ((92.6517381562*z)+((358.561845721*z)+((7.57092809613e-15*z)+((-450.213583877*z)+((-92.4463991738*z)+((-357.767184703*z)+((-7.57092809613e-15*z)+((450.213583877*z)+((-1*z)+-3.63797880709e-12)))))))))=0
Traceback (most recent call last):
  File "./kmz2mc.py", line 503, in <module>
    main()
  File "./kmz2mc.py", line 469, in main
    t2v.geom_tri2voxel(tri)
  File "./kmz2mc.py", line 221, in geom_tri2voxel
    x1, y1, z1, filled = self.geom_bary(tv, x0, y0, z0)
  File "./kmz2mc.py", line 354, in geom_bary
    [l1, l2, l3, x, y, z] = self.geom_cart2bary(tri, x, y, z)
  File "./kmz2mc.py", line 307, in geom_cart2bary
    rval[unsolved_indices] = np.vectorize(lambda v:v(z=real_z))(rval[unsolved_indices])
TypeError: only integer arrays with one element can be converted to an index
What version of numpy are you running? :( I'm not using anything fancier than http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#advanced-indexing

If all else fails, you can convert that to a series of 4 sequential assigns.
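For what it's worth, here's a sketch of one way that exact TypeError can arise (an assumption on my part: that rval is still a plain Python list at that point rather than an ndarray). Advanced indexing with an integer array works on ndarrays but not on lists.

```python
import numpy as np

idx = np.array([0, 2])
as_arr = np.array([10, 20, 30])
print(as_arr[idx])          # fancy indexing on an ndarray works: [10 30]
try:
    [10, 20, 30][idx]       # a list only accepts a single integer index
except TypeError as e:
    print("list indexing failed:", e)
```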

[edit]

Also, I've uploaded Unknown.py to
https://github.com/elfprince13/ModelDecomposition
elfprince13 wrote:
If all else fails, you can convert that to a series of 4 sequential assigns.

[edit]

Also, I've uploaded Unknown.py to
https://github.com/elfprince13/ModelDecomposition
I'll probably just do the assigns. By the way, don't I need to collapse 4 times, once for each of the lambdas and once for the unknown Cartesian coordinate? Also, I am thinking of doing this process only if projection yields a degenerate triangle, for the sake of a shorter code path. Finally, I'll be opening a repo for this code soon.
Nope! Just collapse once to be able to solve the m*z+b=0 for z=-b/m; the __call__ automatically collapses all variables passed to it.


And it might be worthwhile to use timeit on a batch of these for profiling purposes to compare our two codepaths - I still suspect that this will be performant against your python version because of the improved use of vectorization.

Incidentally, double check the end of the big chunk of linear algebra code. I made a copy + paste error and only the second of these two lines is necessary:

Code:
rval = [lambda1,lambda2,lambda3,x,y,lambda1*trimatrix[2,0]+lambda2*trimatrix[2,1]+lambda3*trimatrix[2,2]]

rval = array([lambda1,lambda2,lambda3,x,y,z])



As far as repos go, it might be worthwhile to maintain a shared codebase (with separate forks, but the same project history) on this project in case we want to merge our projects down the road, given the closely related end goals.
elfprince13 wrote:
Nope! Just collapse once to be able to solve the m*z+b=0 for z=-b/m, the __call__ automatically collapses all variables passed to it.
Hmm, I'm confused about that. It doesn't seem to me that that would help with lambda1, lambda2, and lambda3 = 1. - lambda1 - lambda2 still being represented with Unknowns. And outputting the final rval for "successful" solutions seems to confirm this:

Code:
[((-1.7774277720139356e-16*Unknown('z'))+((-0.0*Unknown('z'))+0.5940936327724491)), ((4.5439602130017615e-32*Unknown('z'))+((3.8553084660917634*Unknown('z'))+-601.6324949794442)), ((1.7774277720139356e-16*Unknown('z'))+((0.0*Unknown('z'))+((-4.5439602130017615e-32*Unknown('z'))+((-3.8553084660917634*Unknown('z'))+602.0384013466718)))), 85.200000000000003, 60.700000000000003, -524.63352749989349]

I also notice that every solution is either -524.6335275 or 0.0, which worries me, considering that neither of those are valid coordinates for this model.
Are you sure you're __call__ing properly in the code where you replaced the vectorized assign that wasn't working?
This:

Code:
>>> real_z=-524
>>> ((-1.7774277720139356e-16*Unknown('z'))+((-0.0*Unknown('z'))+0.5940936327724491))(z=real_z)
0.5940936327725422
>>> ((4.5439602130017615e-32*Unknown('z'))+((3.8553084660917634*Unknown('z'))+-601.6324949794442))(z=real_z)
-2621.814131211528
>>> ((1.7774277720139356e-16*Unknown('z'))+((0.0*Unknown('z'))+((-4.5439602130017615e-32*Unknown('z'))+((-3.8553084660917634*Unknown('z'))+602.0384013466718))))(z=real_z)
2622.2200375787556

Replaces the unknowns just fine.

Also, can you maybe pastebin your current code/some input matrices/coordinate sets so I can debug?


[edit]

Looked at pastebin.

Code:
        real_z = -solved.sterm / solved.mterm
        if real_z < epsilon:
            print real_z
            rval[3+c] = real_z
        else:
            print "Found no solution to equation %s=0" % (str(rval[3+c] - P[2,0]))
            return [0., 0., 0., x, y, z]
 
        print rval
        return rval


Should be:

Code:
        real_z = -solved.sterm / solved.mterm
        if solved(z=real_z) < epsilon:
            print real_z
            rval[0] = rval[0](z=real_z)
            rval[1] = rval[1](z=real_z)
            rval[2] = rval[2](z=real_z)
            rval[3+c] = rval[3+c](z=real_z)
        else:
            print "Found no solution to equation %s=0" % (str(rval[3+c] - P[2,0]))
            return [0., 0., 0., x, y, z]
 
        print rval
        return rval


The < epsilon bit was a typo on my part, but the rval thing is the proper replacement for the

Code:
    unsolved_indices = array([1,2,3,3+c])
    rval[unsolved_indices] = rval[unsolved_indices](z=real_z)

or

Code:
    unsolved_indices = array([1,2,3,3+c])
    rval[unsolved_indices] = numpy.vectorize(lambda v: v(z=real_z))(rval[unsolved_indices])

that weren't working

[edit 2]
You dropped your line defining Pplanar somewhere along the way. It should be erroring at you unless you're running it interactively and it's now a constant global.
Nope, that line is there. And I have the proper solved and real_z stuff there now. :/ I'm still getting huge numbers of unsolvable things that seem to actually match the degeneracies I was seeing. In fact, that's pretty logical to me, considering I'm still passing your version of this function unknown variables that could be anywhere along a line in-plane with the triangle, essentially. I figured out the condition that causes this (below) and will try to do some axis-flipping before I even get to your or my version of this function:
Code:
        if (tv[0,majax] == tv[1,majax] and tv[0,smjax] == tv[1,smjax]) or \
           (tv[1,majax] == tv[2,majax] and tv[1,smjax] == tv[2,smjax]) or \
           (tv[2,majax] == tv[0,majax] and tv[2,smjax] == tv[0,smjax]):
            print("***This triangle will be degenerate***")
(tv is tri)
I don't think axis flipping will help a real degenerate triangle, because linear algebra (a singular matrix is a singular matrix), and I'm quite surprised that mine is having trouble with any correct geometry, since all of the transformations should be isometries.

Can you do a full run overnight and see if the output from mine matches or is more or less complete from your version of the function? It should at least resolve issues where the triangle projection was producing a line.

Analytically/geometrically speaking, can you describe the condition that is causing problems? My spatial reasoning/mental code interpretation module has switched off for the night.
elfprince13 wrote:
I don't think axis flipping will help a real degenerate triangle, because linear algebra (a singular matrix is a singular matrix), and I'm quite surprised that mine is having trouble with any correct geometry, since all of the transformations should be isometries.
The problem isn't the tri; all of the tris that we're giving the geom_cart2bary() in any of its forms are fine. The problem is the way I iterate over triangles. I measure the triangle's projection onto each axis, and iterate over the two longest axes, to make sure the generated voxel patch will have no holes. For each point along the plane generated by iterating along those two axes, it asks geom_cart2bary() to find the third coordinate on the triangle. It then generates a voxel there.

However, there are rare triangles that have lines parallel to one of the three axes, AND when I pick the long axes, the projection of the triangle onto the plane I pick is a line. When I give your function or mine points in that plane, they do not intersect the triangle's plane at all! Which means everything goes squiffy.
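(A hedged aside, not code from kmz2mc.py: one standard way to dodge this case is to pick the projection plane by dropping the axis along which the triangle normal is largest. That maximizes the projected area, so the projection can only collapse to a line if the triangle itself is degenerate.)

```python
import numpy as np

# Illustrative helper (names are mine, not from the thread's code):
# return the two axes to iterate over, dropping the axis where the
# triangle normal has its largest component.
def best_projection_axes(v0, v1, v2):
    n = np.cross(v1 - v0, v2 - v0)       # (unnormalized) triangle normal
    drop = int(np.argmax(np.abs(n)))     # axis to project away
    return [a for a in (0, 1, 2) if a != drop]

# A triangle lying in the y=0 plane: its normal points along y,
# so we keep the x and z axes for iteration.
tri = [np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([1., 0., 1.])]
print(best_projection_axes(*tri))  # [0, 2]
```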



In fact, doing axis flipping DOES help, as I demonstrate in this screenshot. Previously-missing regions turn into proper solids or slatted solids. The slats are because it's not iterating with a high enough resolution in the new axis chosen. I plan to find a way to resolve that.



Quote:
Can you do a full run overnight and see if the output from mine matches or is more or less complete from your version of the function? It should at least resolve issues where the triangle projection was producing a line.
I can.

Quote:
Analytically/geometrically speaking, can you describe the condition that is causing problems? My spatial reasoning/mental code interpretation module has switched off for the night
See above. :)
Maybe I'm just misinterpreting the process, but it seems like it would make more sense to iterate in triangle-coordinate land, and project afterwards? That way you know your target surface will always be a square with a particular resolution, rather than a line or arbitrarily small triangle?
elfprince13 wrote:
Maybe I'm just misinterpreting the process, but it seems like it would make more sense to iterate in triangle-coordinate land, and project afterwards? That way you know your target surface will always be a square with a particular resolution, rather than a line or arbitrarily small triangle?
Yeah, that would work very nicely indeed, and would probably remove a "thin triangle" issue I've been struggling with, wherein certain very thin triangles disappear entirely, leading features made up of thin triangles to not show up. How would you recommend that I go about that?
What about something like this:

For a triangle with edges of lengths l1, l2, l3, subdivide each of l1, l2, and l3 into something like m = ceil(ln / voxel_side_length) line segments (i.e. m+1 vertices), and generate a "grid" (or rather several overlapping grids) in barycentric coordinates; for each point on the grid, calculate the interior normal vector.

Any voxel containing 40% or more of the interior normal vector gets filled.

Select the output color/texture using a weighted function based on the total length of interior normal vector contained by that voxel, sampled from a given source texture.
elfprince13 wrote:
What about something like this:

For a triangle with edges of lengths l1, l2, l3, subdivide each of l1, l2, and l3 into something like m = ceil(ln / voxel_side_length) line segments (i.e. m+1 vertices), and generate a "grid" (or rather several overlapping grids) in barycentric coordinates; for each point on the grid, calculate the interior normal vector.
What does the internal normal vector mean in the context of a planar triangle (or just a triangle, since all triangles are planar)?

Counterproposal: Pick the two longest sides as the "axes" for iteration. Iterate on voxel_side/2 steps or so, picking lambda l1 and l2 based on that "grid" (of course, it's not necessarily a square grid). Generate l3, x, y, and z from that. If the point is inside the triangle, rasterize to the voxel grid and fill the corresponding voxel. Use current method of scanning over a quadrilateral section of the texture to compute the average color inside that quad section of the triangle.
In virtually all computer graphics software, vertex order specifies which side of the triangle is the front and which is the back, and the back face can then be safely not drawn. The normal vector pointing out from the "in/back" side of the triangle is the "interior normal vector" for a closed mesh.

I worry that iterating the barycentric "grid" over only 2 axes will lead to empty spaces on triangles where all 3 sides are relatively close to the same length.

Filling based on normals rather than faces also should give a better approximation of the surface, because you're filling voxels that would be significantly occupied by a skin of a certain thickness, rather than simply anywhere a triangle passes through a teensy corner of a cube.

Example: Blue circle is filling by normal/volume, red is filling by surface.


You can of course tune the ratios needed to fill a voxel, and even have different ratios depending on whether you're looking at the base or tip of the normal vector (if you haven't filled yet, accept 30% coverage to prevent a hole, but prefer 60%+ near the outside).
elfprince13 wrote:
In virtually all computer graphics software, vertex order specifies which side of the triangle is the front and which is the back, and then back face can then be safely not drawn. The normal vector pointing out from the "in/back" side of the triangle is the "interior normal vector" for a closed mesh.

I worry that iterating the barycentric "grid" over only 2 axes will lead to empty spaces on triangles where all 3 sides are relatively close to the same length.

Filling based on normals rather than faces also should give a better approximation of the surface, because you're filling voxels that would be significantly occupied by a skin of a certain thickness, rather than simply anywhere a triangle passes through a teensy corner of a cube.

You can of course tune the ratios needed to fill a voxel, and even have different ratios depending on whether you're looking at the base or tip of the normal vector (if you haven't filled yet, accept 30% coverage to prevent a hole, but prefer 60%+ near the outside).
That all makes perfect sense to me, thanks for the great (and diagrammatic) explanation. So you're suggesting iterating over three axes, correct? Projecting lines perpendicular to each side across the triangle, and then generating a normal at every point of intersection? Or more specifically:

1) Iterate L1 over the length of one side
2) Iterate L2 over the length of another side
3) Iterate L3 over the length of the last side
4) For every L1L2, L2L3, or L1L3 intersection:
4a) Compute the normal (cross the vectors from the edges to the points?)
4b) Figure out what voxel(s) that normal lies inside
4c) If >x% of that normal lies inside any voxel, fill the voxel based on a patch of the texture on the triangle surface

Edit: Just for fun, the Metropolitan Museum. Note the missing triangles.
KermMartian wrote:
That all makes perfect sense to me, thanks for the great (and diagrammatic) explanation. So you're suggesting iterating over three axes, correct? Projecting lines perpendicular to each side across the triangle, and then generating a normal at every point of intersection? Or more specifically:

1) Iterate L1 over the length of one side
2) Iterate L2 over the length of another side
3) Iterate L3 over the length of the last side
4) For every L1L2, L2L3, or L1L3 intersection:


Yes :) So in terms of complexity it's really more like iterating over 2 axes 3 times than it is like iterating over 3 axes.


Quote:
4a) Compute the normal
(cross the vectors from the edges to the points?)

Models typically store vertices in counter-clockwise order, so I *think* given Cartesian coordinate vectors for v1,v2,v3, the direction of the exterior normal vector is given by (v2-v1) x (v3-v2), which means the interior normal vector is given by (v3-v2) x (v2-v1) (this is essentially just the right-hand rule as applied to solenoids and magnetic fields). Obviously, you can double-check this if the results seem fishy.
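A quick sanity check of that formula on an illustrative CCW triangle in the xy-plane, viewed from +z (example vertices are mine, not model data):

```python
import numpy as np

v1 = np.array([0., 0., 0.])
v2 = np.array([1., 0., 0.])
v3 = np.array([0., 1., 0.])
exterior = np.cross(v2 - v1, v3 - v2)   # [0. 0. 1.], toward the viewer
interior = np.cross(v3 - v2, v2 - v1)   # [0. 0. -1.], away from the viewer
print(exterior, interior)
```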

Quote:
4b) Figure out what voxel(s) that normal lies inside
4c) If >x% of that normal lies inside any voxel, fill the voxel based on a patch of the texture on the triangle surface

Yep! Though since voxels may have many different normals extending into them, you may want to do some kind of weighted decision making to choose the textures.
Quote:

Edit: Just for fun, the Metropolitan Museum. Note the missing triangles.

Very cool! :)
That all makes sense. I got stuck when I got to the point where I'll be taking points on the edges, casting them into (or out of!) the triangle perpendicular to those edges, and finding the points of intersection. Any insights?


Code:
        # Step 3: Find the length of each side
        L1 = self.geom_eucdist(tv[0],tv[1])
        L2 = self.geom_eucdist(tv[1],tv[2])
        L3 = self.geom_eucdist(tv[2],tv[0])

        # Step 4: Figure out the spacing along each side
        Lspace = 1/array([L1,L2,L3])
        Lspan = [np.ceil(L1),np.ceil(L2),np.ceil(L3)]
        L1s = np.linspace(0,1,1+Lspan[0])
        L2s = np.linspace(0,1,1+Lspan[1])
        L3s = np.linspace(0,1,1+Lspan[2])
        edges = [L1s,L2s,L3s]
       
        # Step 5: Iterate!
        cas = [0,1,0]
        cbs = [1,2,2]
        for i in xrange(3):
            for ca in edges[cas[i]]:
                for cb in edges[cbs[i]]:
                    # Compute either bary or xyz coords from
                    # points in tv and ca, (1-ca), cb, and (1-cb)?
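In case it helps unstick the loop body, one possibility (hedged: `bary_to_cart` is a made-up helper of mine, and treating ca/cb directly as two barycentric weights is only one interpretation of the grid) is to take the weighted sum of the vertices:

```python
import numpy as np

# Hypothetical helper: two barycentric weights determine the third,
# and the Cartesian point is the weighted sum of the triangle vertices.
def bary_to_cart(tv, l1, l2):
    l3 = 1.0 - l1 - l2
    return l1 * tv[0] + l2 * tv[1] + l3 * tv[2]

tv = np.array([[0., 0., 0.], [2., 0., 0.], [0., 2., 0.]])
print(bary_to_cart(tv, 0.25, 0.25))  # [0.5 1.  0. ]
```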
  