Texture mapping a helix

Hey all,
I’m running into some problems mapping a texture onto a helix form, and was hoping someone could look at the current results and my code to help me figure it out. It seems to be grabbing just one point of the image (the nine lines near the far right side). I’m not sure if I’m misunderstanding how the texCoords relate to the image, or if it’s just an oversight in my code.

Thanks!

``````

mesh.setMode(OF_PRIMITIVE_TRIANGLE_STRIP);
for(int i = 1; i < points.size(); i++) {
    ofVec3f thisPoint = points[i-1];
    ofVec3f nextPoint = points[i];

    int imgX = (int)floor(testImg2.width * ( i / (points.size()) ));

    // Map and draw vertices
    if(i%1) {
        int imgX = (int)(testImg2.getWidth() * ((int)((i+1)/2) / (points.size()/2)));
    } else {
    }
}

// Draw loop
testImg2.getTextureReference().bind();
if(panel.getValueB("wireframe")) {
    mesh.drawWireframe();
} else {
    mesh.drawFaces();
}
testImg2.getTextureReference().unbind();
glDisable(GL_DEPTH_TEST);

``````

Not sure what this does:

``````
if(i%1) {

``````

If you’re using ARB coords, your texture coords should go from (0,0) to (img.width, img.height). So as you’re adding vertices you’ll want to add the texture coordinates too, which means (particularly on the x axis) knowing how many points along X you’re adding. I can’t see how you’re generating the vertex positions, but something like this:

``````
int imgX = (int) floor(testImg2.width / pointsX.size());

``````

Thanks for the feedback!
I was using the i%1 to switch between drawing a vertex on the perimeter of the helix and one at its center, based on my understanding of how the triangle strip mode works.

I think I might be misunderstanding how the texCoords work. The idea, as I understand it, is that if I want to map a square image onto, say, a cylinder, I would add a texCoord as ofVec2f(imageXcoord, imageYcoord) before adding each vertex as I generate the cylinder vertices? So what I’ve been doing is, as the helix generates, adding the image X values to the right vertex via this:

``````

int imgX = (int)floor(testImg2.width * ((i-1) / points.size()) );

``````

which, as I currently understand it, should step through the image pixels, assigning the appropriate column to the appropriate vertex?

and the code that generates the points is this:

``````

for(int days = 1; days <= maxDays; days++) {
    for(int hours = 0; hours < 23; hours++) {
        unwrapPoint = 1.0f;
        float rotation = (hours * (TWO_PI * unwrapPoint)) / 23;
        float x = radius * cos(rotation);
        float y = radius * sin(rotation);
        float z = (depth * (hours+1)/24) + (depth * days);

        ofVec3f clockPosition(x, y, z);
        array->push_back(clockPosition);
    }
}

``````

Sorry for the slow reply too, am in the mountains of BC right now and not much internets. Thanks!

No problem. Generally you just want to figure out where along your geometry each part of the image should go, so if you have a geometry that is 400 pixels by 400 pixels, you need to figure out where along those 400x400 the texture coordinates need to lie.

The idea, as I understand it, is that if I want to map a square image onto, say, a cylinder, I would add a texCoord as ofVec2f(imageXcoord, imageYcoord) before adding each vertex as I generate the cylinder vertices?

The order doesn’t matter, as long as the two vectors correlate (if that makes sense: vertex[0] -> texCoords[0]).

I usually do something like this:

``````

for(r = 0; r < rings; r++) for(s = 0; s < sectors; s++)
{
    float const y = sin( -M_PI_2 + M_PI * r * R );
    float const x = cos( 2*M_PI * s * S ) * sin( M_PI * r * R );
    float const z = sin( 2*M_PI * s * S ) * sin( M_PI * r * R );