Three.js
Built on WebGL
Probably the most widely used library for working with WebGL. Whereas the WebGL (and OpenGL) APIs are relatively low-level and state-based, Three.js offers a more mid-level object-based interface, including a "scene-graph".
Note: WebGL2 support is still "in progress" and quite a few features don't seem to be active yet (late 2018).
At the heart of any Three.js project is an animation loop in which a THREE.WebGLRenderer takes a THREE.Camera and a THREE.Scene to actually draw to the screen. Properties of the camera and members of the scene may also be modified within the animation loop:
```javascript
function render() {
  // update members & properties of the scene here for animation
  // TODO
  // now render the scene:
  renderer.render(scene, camera);
}
renderer.setAnimationLoop(render);
```
Before this loop there will be setup code to define the renderer, the camera, and the scene. Three.js' ontology is roughly as follows:
- Renderer (THREE.WebGLRenderer)
- Camera (THREE.PerspectiveCamera, THREE.OrthographicCamera, THREE.StereoCamera, etc.)
- Scene (THREE.Scene)
  - Meshes (THREE.Mesh or THREE.SkinnedMesh)
    - Geometry (THREE has lots of built-in geometry types from simple BoxGeometry to arbitrary BufferGeometry)
    - Material (THREE has lots of built-in material types from MeshBasicMaterial to customized ShaderMaterial)
  - Lights (THREE has several light types to choose from)
  - Possibly other scene entities
A brief example:
```javascript
let renderer = new THREE.WebGLRenderer({
  // various options here
  // render to the HTML element <canvas id="webgl-canvas"></canvas> in your page:
  canvas: document.getElementById("webgl-canvas"),
});

// create camera
// (there are other kinds of cameras available, but this is a typical example)
let camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
// configure camera, e.g. move it back from the origin:
camera.position.z = 5;

let scene = new THREE.Scene();
// configure scene
// e.g. create a Mesh,
// from a Geometry (the actual shape, defined by built-in shapes, or by writing Buffers)
// and a Material (how it looks, defined by built-in Material types, or by writing Shaders)
// and add to the scene:
let geometry = new THREE.BoxGeometry(1, 1, 1);
let material = new THREE.MeshPhongMaterial({
  color: "#fff",
  flatShading: true,
  shininess: 0
});
let cube = new THREE.Mesh(geometry, material);
scene.add(cube);

// e.g. add some lights to the scene:
let ambientLight = new THREE.AmbientLight("#666");
scene.add(ambientLight);
let directionalLight = new THREE.DirectionalLight("#fff", 0.5);
directionalLight.position.set(3, 2, 1);
scene.add(directionalLight);

function render() {
  // update members & properties of the scene here for animation
  // e.g.
  cube.rotation.x += 0.01;
  cube.rotation.y += 0.01;
  // now render the scene:
  renderer.render(scene, camera);
}
renderer.setAnimationLoop(render);
```
A typical resize function might look like this:
```javascript
renderer.setSize(window.innerWidth, window.innerHeight);
window.addEventListener('resize', function() {
  const w = window.innerWidth, h = window.innerHeight;
  renderer.setSize(w, h);
  camera.aspect = w / h;
  camera.updateProjectionMatrix();
});
```
```javascript
new THREE.Points(new THREE.BufferGeometry(), new THREE.PointsMaterial())
```
https://threejs.org/docs/?q=point#api/en/objects/Points
- Optionally, set index attribute for indexed buffer geometry
- supports raycast intersections
- has some kind of morph target capability
- draw subranges: geometry.setDrawRange(start, count) and attributes.setUsage( THREE.DynamicDrawUsage )
- optionally, add groups. Each group can be a different drawRange, and have a different material
https://threejs.org/docs/?q=point#api/en/materials/PointsMaterial
- colour / map / alphaMap for setting colour/texture (color all points at once, or use vertexColors: true for a per-vertex color attribute)
- size / sizeAttenuation (bool) for size-by-distance (all points at once)
- may also want to set blending: THREE.AdditiveBlending, depthTest: false, transparent: true
- For anything fancier, will need a custom RawShaderMaterial
Examples
- https://github.com/mrdoob/three.js/blob/master/examples/webgl_interactive_raycasting_points.html -- raycasting points
- https://github.com/mrdoob/three.js/blob/master/examples/webgl_points_dynamic.html -- cloud from OBJ, and also modifying geometry
- https://github.com/mrdoob/three.js/blob/master/examples/webgl_buffergeometry_points_interleaved.html -- using interleaved arraybuffer to store e.g. 32-bit position and 8-bit colour in a single 128-bit struct-per-particle
- https://github.com/mrdoob/three.js/blob/master/examples/webgl_buffergeometry_drawrange.html -- varying the drawRange of the buffer
To generate particle sprite textures, can load from images of course, but can also generate via HTML5 Canvas and new THREE.CanvasTexture(), or from raw data using new THREE.DataTexture().
Possibly also consider using InstancedBufferGeometry with a quad? E.g. https://tympanus.net/codrops/2019/01/17/interactive-particles-with-three-js/
Codepen demo: https://codepen.io/grrrwaaa/pen/gOWyPNY
- Events: the main animate() loop (requestAnimationFrame), any other standard web events (mouse/keyboard/network/etc.)
- Renderer (WebGLRenderer): set canvas size, attach to DOM. Mostly static settings for rendering options.
- Camera. A few kinds (perspective, ortho, cubemap, stereo, etc.). For VR will always be using perspective.
- There could be many cameras, but only one is used to render per frame
- Camera is also an Object3D
- Scene graph of objects, in a tree hierarchy rooted at Scene.
- Object3D base class
- Has a pose (position/orientation/scale), children, parent, visible, etc. properties
- can be an empty container (but better to use Group for that),
- There could be many scenes, but only one is used to render per frame
- .layers: like tags. an object is only rendered if it has a layer tag in common with the camera. also used to filter raycasting.
- Mesh is an Object3D with a Geometry and a Material (shader + textures)
- Static objects should set object.matrixAutoUpdate = false;
- For procedural geometries: BufferGeometries use typed arrays -- more flexible, faster, but less friendly.
- For many objects, use InstancedBufferGeometry etc.
- For dynamic textures, keep setting texture.needsUpdate = true;
- For postprocessing, see https://threejs.org/docs/index.html#manual/en/introduction/How-to-use-post-processing
- Also: animation, raycasting, physics, positional audio, ...
- Basic live code editor written by the Three.js author: https://mrdoob.com/projects/htmleditor/
- Source: https://github.com/mrdoob/htmleditor
- Essentially a document-level reloader: entire HTML doc is edited in text overlay and reloaded into iframe below it on each (successful) edit
- A more visual editor, also by Three.js: https://threejs.org/editor/
- Source: https://github.com/mrdoob/three.js/tree/master/editor
- More of a scene-graph, unity-like interface.
- Top of tree has Camera, Scene (Renderer is in a settings page)
- Each node has inspector for object, material, geometry, and possible script components.
  - this is used to access the object (e.g. the Scene in the Scene's script)
  - an update(event) routine allows per-frame actions; there is also pointermove(event), and probably other callbacks
  - the Scene script acts as the main game script in most of the examples
- scene.getObjectByName( 'Brick' ) to find other objects.
- Materials can be standard three.js materials or custom shaders (with GLSL code editors for interface / vertex shader / fragment shader)
- A similar approach, more fleshed out, by Mozilla Hubs: https://hubs.mozilla.com/spoke
- Includes asset import from google poly etc.
- Built on React: https://github.com/ekatzenstein/three.js-live
- Built on CoffeeScript etc.: https://livecodelab.net
- Built on ClojureScript: https://github.com/cassiel/threejs-figwheel-main
- See these docs: https://threejs.org/docs/index.html#manual/en/introduction/How-to-dispose-of-objects
From within VR, a code-oriented interface is almost unworkable. Some kind of visual interface would make more sense: editing in terms of structural components, intentions, relations, flows etc. rather than JS directly. In concept this is certainly feasible: A-Frame does the very same thing, using a DOM interface to generate Three.js code.
Perhaps in a form that can still be code-edited from desktop experiences. That means a projectional editor. (https://www.martinfowler.com/bliki/ProjectionalEditing.html, https://en.wikipedia.org/wiki/Structure_editor).
- Reload entire sub-page as an iframe (https://mrdoob.com/projects/htmleditor/)?
- Or modify scene graph / replace scene/renderer and replace animate() ?
- Something somewhere in between, in a more p5 style. Minimal example here: https://codepen.io/grrrwaaa/pen/yLMaYeR
The iframe option has the advantage of no leaky state (instead we have to provide serialization/deserialization to preserve state if and as desired). Any preference from the VR perspective?