Why are texture coordinates often called UVs?

Computer Graphics Asked by samanthaj on August 27, 2021

Is there some historical reason texture coordinates are often called UVs? I get that vertex positions are x, y, z, but even OpenGL has TEXTURE_WRAP_S and TEXTURE_WRAP_T, and GLSL has swizzle aliases, so if a texture coordinate is in a vec you can access it with

someVec.st


but not

someVec.uv  (not valid: GLSL's swizzle sets are xyzw, rgba, and stpq, so there is no uv alias)


And yet pretty much every modeling package (Maya, Blender, Unity, Unreal, 3ds Max) calls them UVs.

Where does the term UVs come from? Is this a known part of computer graphics history or is the reason they are called UVs lost in pixels of cg time?

In math, geometry, and physics it is common practice to use the coordinates $$(u,v)$$ to represent an arbitrary parameterisation, including that of a surface in 3D Euclidean space. Since the coordinates of the parameterisation might be arbitrary (each could be an angle, or a function of the Euclidean coordinates $$(x,y,z)$$, or something else), it is helpful to distinguish them from the coordinates used to represent the wider Euclidean space in which the surface exists.

The $$(u,v)$$-notation caught on in computer graphics for the same reason: it clarifies that the coordinates used to index into your texture do not necessarily align with the world space (or view space) coordinates $$(x,y,z)$$, but are instead an index into a parameterisation of a surface in that space.
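A standard example of such a parameterisation (not from the original answer, just an illustration): a unit sphere in $$(x,y,z)$$-space can be described by two parameters,

$$\mathbf{p}(u,v) = (\cos u \,\sin v,\ \sin u \,\sin v,\ \cos v), \qquad 0 \le u < 2\pi,\ \ 0 \le v \le \pi.$$

Here $$u$$ and $$v$$ are angles, not spatial positions, which is exactly why a separate pair of letters is useful: the same $$(u,v)$$ pair can then also index into a texture wrapped around the sphere.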

Answered by Jessica Hansen on August 27, 2021

This is not a definitive answer, but it is generally accepted that Ed Catmull introduced texture mapping in his 1974 thesis, "A SUBDIVISION ALGORITHM FOR COMPUTER DISPLAY OF CURVED SURFACES".

In it, he uses (U,V) to access the image data (see the page labeled 36 in that thesis):

MAPPING
Photographs, drawings, or any picture can be mapped onto bivariate patches. This is one of the most interesting consequences of the patch splitting algorithm. It gives a method for putting texture, drawings, or photographs onto surfaces....

...If a photograph is scanned in at a resolution of x times y then every element can be referenced by u·x and v·y where 0<=u,v<=1. In general, one could think of the intensity as a function I(u,v) where I references a picture.
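A minimal sketch of that indexing scheme in Python, assuming a row-major image and nearest-neighbour lookup (the function name and toy texture are my own, not from the thesis):

```python
def sample_nearest(image, u, v):
    """Look up a texel using normalized coordinates (u, v) in [0, 1].

    Mirrors Catmull's I(u, v): the pixel index is simply u * width
    and v * height, clamped so u = 1 or v = 1 stays in range.
    """
    height = len(image)
    width = len(image[0])
    # Scale normalized coordinates to integer pixel indices (u·x, v·y).
    ix = min(int(u * width), width - 1)
    iy = min(int(v * height), height - 1)
    return image[iy][ix]

# A tiny 2x2 "texture": top row holds 0s, bottom row holds 1s.
tex = [[0, 0],
       [1, 1]]
print(sample_nearest(tex, 0.0, 0.0))  # -> 0 (top-left texel)
print(sample_nearest(tex, 0.9, 0.9))  # -> 1 (bottom-right texel)
```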

I believe this thesis also introduced the concept of the Z-Buffer (Page 32)

Answered by Simon F on August 27, 2021