
Built on WebGL

Probably the most widely used library for working with WebGL. Whereas the WebGL (and OpenGL) APIs are relatively low-level and state-based, Three.js offers a more mid-level, object-based interface, including a "scene graph".

Note: WebGL2 support is still "in progress" and quite a few features don't seem to be active yet (late 2018).
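
If you want to experiment with WebGL2 anyway, one approach (documented for Three.js releases of that era; details may differ in newer versions) is to create a "webgl2" context yourself and hand it to the renderer:

// a minimal sketch, assuming a <canvas id="webgl-canvas"> element exists in the page:
const canvas = document.getElementById("webgl-canvas");
const context = canvas.getContext("webgl2", { alpha: false });
const renderer = new THREE.WebGLRenderer({ canvas: canvas, context: context });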

Minimal Three.js example

At the heart of any Three.js project is an animation loop in which a THREE.WebGLRenderer takes a THREE.Camera and a THREE.Scene to actually draw to the screen. Properties of the camera and members of the scene may also be modified within the animation loop:

function render() {
	// update members & properties of the scene here for animation
	// TODO

	// now render the scene:
	renderer.render(scene, camera);
}
renderer.setAnimationLoop(render);

Before this loop there will be setup code to define the renderer, the camera, and the scene. Three.js' ontology is roughly as follows:

  • Renderer (THREE.WebGLRenderer)
  • Camera (THREE.PerspectiveCamera, THREE.OrthographicCamera, THREE.StereoCamera, etc.)
  • Scene (THREE.Scene)
    • Meshes (THREE.Mesh or THREE.SkinnedMesh)
      • Geometry (THREE has lots of built-in geometry types from simple BoxGeometry to arbitrary BufferGeometry)
      • Material (THREE has lots of built-in material types from MeshBasicMaterial to customized ShaderMaterial)
    • Lights (THREE has several light types to choose from)
    • Possibly other scene entities

A brief example:

let renderer = new THREE.WebGLRenderer({
	// various options here
	// render to the HTML element <canvas id="webgl-canvas"></canvas> in your page:
	canvas: document.getElementById("webgl-canvas"),
});

// create camera
// (there are other kinds of cameras available, but this is a typical example)
let camera = new THREE.PerspectiveCamera( 75, window.innerWidth / window.innerHeight, 0.1, 1000 );
// configure camera, e.g. move it back from the origin:
camera.position.z = 5;

let scene = new THREE.Scene();
// configure scene

// e.g. create a Mesh, 
// from a Geometry (the actual shape, defined by built-in shapes, or by writing Buffers) 
// and a Material (how it looks, defined by built-in Material types, or by writing Shaders)
// and add to the scene:
let geometry = new THREE.BoxGeometry(1, 1, 1);
let material = new THREE.MeshPhongMaterial({ 
	color: "#fff", 
	flatShading: true, 
	shininess: 0 
});
let cube = new THREE.Mesh( geometry, material );
scene.add( cube );

// e.g. add some lights to the scene:
let ambientLight = new THREE.AmbientLight("#666");
scene.add( ambientLight );

let directionalLight = new THREE.DirectionalLight("#fff", 0.5);
directionalLight.position.set(3, 2, 1);
scene.add( directionalLight );

function render() {
	// update members & properties of the scene here for animation
	// e.g.
	cube.rotation.x += 0.01;
	cube.rotation.y += 0.01;

	// now render the scene:
	renderer.render(scene, camera);
}
renderer.setAnimationLoop(render);

A typical resize function might look like this:

renderer.setSize( window.innerWidth, window.innerHeight );
window.addEventListener('resize', function() {
	const w = window.innerWidth, h = window.innerHeight;
	renderer.setSize(w, h);
	camera.aspect = w / h;
	camera.updateProjectionMatrix();
});

Point clouds

new THREE.Points(new THREE.BufferGeometry(), new THREE.PointsMaterial())

https://threejs.org/docs/?q=point#api/en/objects/Points

  • Optionally, set index attribute for indexed buffer geometry
  • supports raycast intersections
  • has some kind of morph target capability
  • draw subranges: geometry.setDrawRange(start, count); for attributes that change every frame, also call bufferAttribute.setUsage(THREE.DynamicDrawUsage)
  • optionally, add groups. Each group can be a different drawRange, and have a different material

https://threejs.org/docs/?q=point#api/en/materials/PointsMaterial

  • color / map / alphaMap for setting colour/texture (colors all points at once, or use vertexColors: true with a per-vertex color attribute)
  • size / sizeAttenuation (bool) for size-by-distance attenuation (all points at once)
  • may also want to set blending: THREE.AdditiveBlending, depthTest: false, transparent: true
  • For anything fancier, you will need a custom ShaderMaterial or RawShaderMaterial. (A minimal point-cloud sketch follows below.)
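
Putting these together, here is a minimal point-cloud sketch, assuming the renderer/camera/scene setup from the example above and a recent Three.js where BufferGeometry.setAttribute() is available (older releases used addAttribute()); the point count and random layout are arbitrary:

// build typed-array attributes for an (arbitrary) cloud of random points
const NUM_POINTS = 10000;
const positions = new Float32Array(NUM_POINTS * 3);
const colors = new Float32Array(NUM_POINTS * 3);
for (let i = 0; i < NUM_POINTS; i++) {
	positions[i*3+0] = (Math.random() - 0.5) * 10;
	positions[i*3+1] = (Math.random() - 0.5) * 10;
	positions[i*3+2] = (Math.random() - 0.5) * 10;
	colors[i*3+0] = Math.random();
	colors[i*3+1] = Math.random();
	colors[i*3+2] = Math.random();
}
const pointGeometry = new THREE.BufferGeometry();
pointGeometry.setAttribute("position", new THREE.BufferAttribute(positions, 3));
pointGeometry.setAttribute("color", new THREE.BufferAttribute(colors, 3));
// optionally, draw only a subrange of the points:
//pointGeometry.setDrawRange(0, NUM_POINTS / 2);

const pointMaterial = new THREE.PointsMaterial({
	size: 0.05,
	vertexColors: true,     // use the per-vertex "color" attribute
	sizeAttenuation: true,  // points shrink with distance from the camera
	transparent: true,
	depthTest: false,
	blending: THREE.AdditiveBlending,
});

const points = new THREE.Points(pointGeometry, pointMaterial);
scene.add(points);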

Examples

To generate particle sprite textures, you can of course load images, but you can also generate them via an HTML5 Canvas and new THREE.CanvasTexture(), or from raw data using new THREE.DataTexture().
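
For example, a soft round sprite could be drawn onto a 2D canvas with a radial gradient and wrapped in a CanvasTexture (a sketch; the texture size and gradient stops are arbitrary):

// generate a soft circular sprite via an offscreen 2D canvas
const spriteSize = 64;
const spriteCanvas = document.createElement("canvas");
spriteCanvas.width = spriteCanvas.height = spriteSize;
const ctx = spriteCanvas.getContext("2d");
const gradient = ctx.createRadialGradient(spriteSize/2, spriteSize/2, 0, spriteSize/2, spriteSize/2, spriteSize/2);
gradient.addColorStop(0, "rgba(255,255,255,1)");
gradient.addColorStop(1, "rgba(255,255,255,0)");
ctx.fillStyle = gradient;
ctx.fillRect(0, 0, spriteSize, spriteSize);

const spriteTexture = new THREE.CanvasTexture(spriteCanvas);
// e.g. use it as the "map" of a PointsMaterial:
//pointMaterial.map = spriteTexture;
// if the canvas is redrawn later, flag the texture for re-upload:
//spriteTexture.needsUpdate = true;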

Possibly also consider using InstancedBufferGeometry with a quad? E.g. https://tympanus.net/codrops/2019/01/17/interactive-particles-with-three-js/
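
The linked tutorial builds a custom InstancedBufferGeometry with its own shaders; a simpler (if less flexible) sketch of the same idea, assuming a recent Three.js that provides InstancedMesh, draws many quads in a single instanced call:

// many small quads drawn in one instanced draw call
const COUNT = 1000;
const quad = new THREE.PlaneGeometry(0.1, 0.1);
const quadMaterial = new THREE.MeshBasicMaterial({ color: "#fff", transparent: true });
const instances = new THREE.InstancedMesh(quad, quadMaterial, COUNT);
const dummy = new THREE.Object3D();
for (let i = 0; i < COUNT; i++) {
	// give each instance an (arbitrary) random position
	dummy.position.set((Math.random()-0.5)*10, (Math.random()-0.5)*10, (Math.random()-0.5)*10);
	dummy.updateMatrix();
	instances.setMatrixAt(i, dummy.matrix);
}
instances.instanceMatrix.needsUpdate = true;
scene.add(instances);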

Codepen demo: https://codepen.io/grrrwaaa/pen/gOWyPNY

Ontology

  • Events: the main animate() loop (requestAnimationFrame), any other standard web events (mouse/keyboard/network/etc.)

  • Renderer (THREE.WebGLRenderer): set canvas size, attach to the DOM. Mostly static settings for rendering options.

  • Camera. A few kinds (perspective, ortho, cubemap, stereo, etc.). For VR you will always be using a perspective camera.

    • There could be many cameras, but only one is used to render per frame
    • Camera is also an Object3D
  • Scene graph of objects, in a tree hierarchy rooted at Scene.

    • Object3D base class
    • Has a pose (position/orientation/scale), children, parent, visible, etc. properties
    • Can be an empty container (but better to use Group for that)
    • There could be many scenes, but only one is used to render per frame
    • .layers: like tags. An object is only rendered if it has a layer tag in common with the camera. Also used to filter raycasting.
  • Mesh is an Object3D with a Geometry and a Material (shader + textures)

  • Static objects should set object.matrixAutoUpdate = false;

  • For procedural geometries: BufferGeometries use typed arrays -- more flexible, faster, but less friendly (see the sketch after this list).

  • For many similar objects, use InstancedMesh / InstancedBufferGeometry etc. (see the instancing sketch above).

  • For dynamic textures, keep setting texture.needsUpdate = true;

  • For postprocessing, see https://threejs.org/docs/index.html#manual/en/introduction/How-to-use-post-processing

  • Also: animation, raycasting, physics, positional audio, etc.
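
As a sketch of the "less friendly" typed-array route mentioned above, here is a small procedural indexed BufferGeometry (a single quad made of two triangles), assuming the scene from the earlier example:

// a procedural indexed BufferGeometry: one quad built from two triangles
const quadGeo = new THREE.BufferGeometry();
quadGeo.setAttribute("position", new THREE.BufferAttribute(new Float32Array([
	-1, -1, 0,
	 1, -1, 0,
	 1,  1, 0,
	-1,  1, 0,
]), 3));
quadGeo.setAttribute("normal", new THREE.BufferAttribute(new Float32Array([
	0, 0, 1,   0, 0, 1,   0, 0, 1,   0, 0, 1,
]), 3));
quadGeo.setAttribute("uv", new THREE.BufferAttribute(new Float32Array([
	0, 0,   1, 0,   1, 1,   0, 1,
]), 2));
// the index lets the two triangles share vertices:
quadGeo.setIndex([0, 1, 2,   2, 3, 0]);
const quadMesh = new THREE.Mesh(quadGeo, new THREE.MeshBasicMaterial({ color: "#0f0" }));
scene.add(quadMesh);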


Live Coding Three.js? In VR?

Memory & cleanup
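
One relevant detail: removing objects from a scene does not free their GPU resources, so a live-coding system that repeatedly rebuilds the scene would need to dispose of what it replaces. A sketch, using the cube from the example above:

// removing an object from the scene does not free GPU memory...
scene.remove(cube);
// ...so explicitly dispose of its geometry, material, and any textures:
cube.geometry.dispose();
cube.material.dispose();
if (cube.material.map) cube.material.map.dispose();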

No text input in VR

From within VR, a code-oriented interface is almost unworkable. Some kind of visual interface would make more sense: editing in terms of structural components, intentions, relations, flows, etc. rather than JS directly. In concept this is certainly feasible: A-Frame does much the same thing, using a DOM (HTML) interface to construct Three.js scenes.

Perhaps in a form that can still be code-edited from desktop experiences. That means a projectional editor. (https://www.martinfowler.com/bliki/ProjectionalEditing.html, https://en.wikipedia.org/wiki/Structure_editor).

What level of abstraction?

The iframe option has the advantage of no leaky state (instead we have to provide serialization/deserialization to preserve state if and as desired). Any preference from a VR perspective?