This is rarely necessary with modern-day game engines, as outlines like this are probably handled more efficiently with a vertex shader than by doubling the triangle count.
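For anyone curious what that vertex-shader version looks like, here is a minimal GLSL sketch (the attribute and uniform names are my own, not from any particular engine): the mesh is drawn a second time with front-face culling enabled, and the vertex shader pushes each vertex outward along its normal so that only the inflated black "shell" remains visible around the silhouette.

```glsl
// Outline pass: draw the mesh a second time with glCullFace(GL_FRONT)
// so only the back faces of the inflated shell are visible.
#version 330 core

in vec3 a_position;   // object-space vertex position
in vec3 a_normal;     // object-space vertex normal (must be smooth!)

uniform mat4 u_modelViewProjection;
uniform float u_outlineWidth;   // world-space outline thickness

void main()
{
    // Push the vertex outward along its normal to create the shell.
    vec3 inflated = a_position + normalize(a_normal) * u_outlineWidth;
    gl_Position = u_modelViewProjection * vec4(inflated, 1.0);
}
```

The matching fragment shader just outputs a flat outline color (e.g. black); no extra copy of the mesh data is needed.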
A similar technique is still used with older game engines, such as certain maps created for Quake III Arena. Rather than duplicating each polygon, the compiler Q3Map2 offsets each surface and applies a black texture to each backface, creating the illusion of inked edges.
Documentation:
http://q3map2.robotrenegade.com/docs/shader_manual/cel-shading.html
Example image:
http://webpages.charter.net/phobos/images/cel_screenshot.jpg
Other similar examples:
http://www.blog.radiator.debacle.us/2010/07/geocomp2-demon-pigs-go-hog-wild-by.html
Thanks for the comments, guys :) Obsidian is right, I also don't believe this tech is necessary in modern games. But I love this approach because it shows the trick clearly, and hey, maybe someone out there has an engine without fancy shaders and can use this workaround to achieve a comic look :)
I find the geometric approach can still have qualitative differences from a screen-space outline - enough that you'd consider using one over the other. With screen-space approaches, you get a uniform pixel-width line. With a world-space geometry shift, the line gets thicker as you get closer to it. You can, of course, do some clever stuff in shaders to blend between the two. Also, the Sobel approach is really generic and looks robotic in its implementation, whereas using vertex data starts to give you far more artisanal control over line quality - controlling width, colour, etc. across different sections of a mesh at a vertex or even texel level.
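For comparison, here is roughly what that screen-space Sobel filter looks like as a GLSL fragment shader over a full-screen quad (the texture and uniform names are mine, and a real implementation would more likely run on depth or normals than on luminance). It convolves the scene with the two 3x3 Sobel kernels and darkens pixels where the gradient is strong, which is exactly why every edge comes out the same pixel width regardless of distance:

```glsl
#version 330 core

in vec2 v_uv;                 // full-screen quad UVs
out vec4 fragColor;

uniform sampler2D u_scene;    // rendered scene (or a depth/normal buffer)
uniform vec2 u_texelSize;     // 1.0 / screen resolution

float luma(vec2 uv)
{
    return dot(texture(u_scene, uv).rgb, vec3(0.299, 0.587, 0.114));
}

void main()
{
    // Sample the 3x3 neighbourhood around the current pixel.
    float tl = luma(v_uv + u_texelSize * vec2(-1.0,  1.0));
    float  t = luma(v_uv + u_texelSize * vec2( 0.0,  1.0));
    float tr = luma(v_uv + u_texelSize * vec2( 1.0,  1.0));
    float  l = luma(v_uv + u_texelSize * vec2(-1.0,  0.0));
    float  r = luma(v_uv + u_texelSize * vec2( 1.0,  0.0));
    float bl = luma(v_uv + u_texelSize * vec2(-1.0, -1.0));
    float  b = luma(v_uv + u_texelSize * vec2( 0.0, -1.0));
    float br = luma(v_uv + u_texelSize * vec2( 1.0, -1.0));

    // Sobel kernels: horizontal and vertical gradients.
    float gx = (tr + 2.0 * r + br) - (tl + 2.0 * l + bl);
    float gy = (tl + 2.0 * t + tr) - (bl + 2.0 * b + br);
    float edge = clamp(length(vec2(gx, gy)), 0.0, 1.0);

    // Darken edge pixels; the line is always ~1 pixel wide.
    vec3 scene = texture(u_scene, v_uv).rgb;
    fragColor = vec4(mix(scene, vec3(0.0), edge), 1.0);
}
```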
But yes, this can completely be done with a vertex shader, rather than burning it into your model data.
The thing to watch out for with the vertex shader approach is that you always need smooth normals to pull off the effect - if you have flat-shaded normals, the "shell" that you create will have open (non-manifold) edges all over the place: the normals aren't shared between verts, so when you move the vertices along them, they diverge/converge and gaps open up. Basically, your shell becomes a mess.
In essence, you DO want a copy of whatever data you're using, but without doing the "move along normals" step (let the vertex shader handle that trivial part). This does mean you're kind of doubling up data, but that data can then be interpreted in fun, different ways by the shader (use vertex-color luminance for line width, alpha for vertex transparency, or build the UVs like a flow map so that you can "grow in" your silhouettes) - see the sketch below.
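To make those two points concrete, here is a hedged GLSL sketch of such an outline vertex shader (all attribute and uniform names are hypothetical): the smoothed normals are averaged per position offline and baked into a separate attribute, so even a flat-shaded mesh forms a closed shell, and the vertex color drives per-vertex line width and transparency as described above:

```glsl
#version 330 core

in vec3 a_position;
in vec3 a_smoothNormal;   // averaged per-position normals baked offline,
                          // so flat-shaded meshes still form a closed shell
in vec4 a_color;          // artist-painted vertex color

uniform mat4 u_modelViewProjection;
uniform float u_maxOutlineWidth;

out vec4 v_outlineColor;

void main()
{
    // Vertex-color luminance drives the per-vertex line width...
    float width = dot(a_color.rgb, vec3(0.299, 0.587, 0.114)) * u_maxOutlineWidth;
    vec3 inflated = a_position + normalize(a_smoothNormal) * width;
    gl_Position = u_modelViewProjection * vec4(inflated, 1.0);

    // ...and alpha drives the per-vertex outline transparency.
    v_outlineColor = vec4(0.0, 0.0, 0.0, a_color.a);
}
```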
You mention some really good points here. Especially that you get thicker near lines and thinner far lines is great from a composition standpoint. That the data will exist twice is true... but as far as I understood, it's not too bad, because there's no heavy material needed for the outline, and I think rendering the extra polygons isn't a big problem for the hardware. Correct me if I'm wrong :D Oh, nice point about the shading - that's something I didn't think about.
Ok, but is it better than just adding a Sobel edge post-process?
Thanks for the comment!