
1. Geometry

Rubén Rodríguez edited this page Feb 4, 2020 · 5 revisions

Geminis

This is the repository that will be used for the assignments and projects of the Computer Graphics 2019-2020 course. Throughout this document we describe the assignments proposed by the teaching staff of the subject as well as the final evaluation projects.

Phase 1 - Geometry

Goal

  • Apply theoretical knowledge about geometry.
  • Design and implement basic geometrical structures that will be used for later assignments.

Description of the work

The work in this phase has consisted of implementing software to control interplanetary travel. The software has been designed to provide:

  • The outgoing direction from the launching station (from the point of view of the launching station).
  • The incoming direction at the receiving station (from the point of view of the receiving station).

In order to carry out this task, the following types of data had to be modeled:

Points and Directions

The data type Point represents the position of a place in three-dimensional space. This data type has three components which represent the values on the x-axis, the y-axis and the z-axis. In addition, there is a fourth component, called the homogeneous component, which takes the value 1 and which is shared with the data type Direction.

The data type Direction represents directions in three-dimensional space. This data type also has three components which represent the values on the x-axis, the y-axis and the z-axis. As mentioned earlier, Direction also has a homogeneous component, whose value is zero.

Thus, although Point and Direction are structurally identical, the homogeneous component makes it possible to distinguish them.

For each of the above data types, following the didactic material of the subject, we have implemented those operations that make logical sense in space, such as the addition and subtraction of directions, the subtraction of two points to obtain the direction between them, the scalar and vector products of directions, the modulus of direction vectors, changes of base, etcetera.
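The data types and operations above can be sketched roughly as follows. This is a minimal illustration (names like `Tuple3`, `sub` and `modulus` are hypothetical, not necessarily those of the actual implementation):

```cpp
#include <cassert>
#include <cmath>

// A 3D tuple with a homogeneous component h:
// h = 1 for points, h = 0 for directions.
struct Tuple3 {
    double x, y, z, h;
};

Tuple3 point(double x, double y, double z)     { return {x, y, z, 1.0}; }
Tuple3 direction(double x, double y, double z) { return {x, y, z, 0.0}; }

// Subtracting two points yields the direction between them (h: 1 - 1 = 0).
Tuple3 sub(const Tuple3& a, const Tuple3& b) {
    return {a.x - b.x, a.y - b.y, a.z - b.z, a.h - b.h};
}

// Scalar (dot) product of two directions.
double dot(const Tuple3& a, const Tuple3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Vector (cross) product of two directions.
Tuple3 cross(const Tuple3& a, const Tuple3& b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x, 0.0};
}

// Modulus (length) of a direction.
double modulus(const Tuple3& d) {
    return std::sqrt(dot(d, d));
}
```

Note how the homogeneous component propagates automatically: the difference of two points has h = 0, i.e. it is a direction.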

Transformation matrix

The transformation matrix is a data type that compactly represents geometric transformations in three dimensions, relying on the homogeneous component that distinguishes points from directions.

Thanks to the transformation matrices it has been possible to carry out operations such as rotations around any of the x, y or z axes. Additionally, it has been possible to perform scaling operations and base changes in order to achieve correct interplanetary connections both in the local coordinates of the planets and in UCS coordinates.
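As a sketch of how a 4x4 homogeneous matrix behaves (names and layout are illustrative, not the actual implementation), note that a translation moves points (h = 1) but leaves directions (h = 0) untouched:

```cpp
#include <array>
#include <cassert>
#include <cmath>

// A 4x4 homogeneous transformation matrix applied to a tuple (x, y, z, h).
using Mat4 = std::array<std::array<double, 4>, 4>;
using Vec4 = std::array<double, 4>;

// Rotation of `angle` radians around the z axis.
Mat4 rotationZ(double angle) {
    double c = std::cos(angle), s = std::sin(angle);
    return {{{c, -s, 0, 0},
             {s,  c, 0, 0},
             {0,  0, 1, 0},
             {0,  0, 0, 1}}};
}

Mat4 translation(double tx, double ty, double tz) {
    return {{{1, 0, 0, tx},
             {0, 1, 0, ty},
             {0, 0, 1, tz},
             {0, 0, 0, 1}}};
}

// Standard matrix-vector product.
Vec4 apply(const Mat4& m, const Vec4& v) {
    Vec4 r{0, 0, 0, 0};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            r[i] += m[i][j] * v[j];
    return r;
}
```

The translation column multiplies the homogeneous component, which is exactly why points and directions can share one matrix type.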

Planets

Planets are modeled as perfect spheres. They are defined by:

  • Center: A point in space, measured from the Universe Coordinate System (UCS).
  • Axis: The direction that connects the South Pole with the North Pole of the planet. Its modulus should therefore be twice the radius of the planet.
  • Reference city: The position in space of the reference city for the planet, from which the azimuth (longitude) is measured, measured from the UCS. On Earth, this city is Greenwich. The distance between this reference city and the planet's center is the radius of the planet.

When asked for a planet, the user will introduce these three vectors. The system must double-check that the radius defined by the axis and the radius defined by the distance between the center and the reference city are the same (maximum error of 10⁻⁶). The following image graphically illustrates the representation of the planet:
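That consistency check can be sketched as follows (a minimal illustration; `Vec3` and `validPlanet` are hypothetical names):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

double modulus(const Vec3& v) {
    return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}

// The radius derived from the axis (half its modulus) must match the
// distance between the center and the reference city, up to 1e-6.
bool validPlanet(const Vec3& center, const Vec3& axis, const Vec3& refCity) {
    double radiusFromAxis = modulus(axis) / 2.0;
    Vec3 centerToCity{refCity.x - center.x,
                      refCity.y - center.y,
                      refCity.z - center.z};
    double radiusFromCity = modulus(centerToCity);
    return std::abs(radiusFromAxis - radiusFromCity) <= 1e-6;
}
```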

Planetary station

A planetary station is positioned at a specific location on the surface of the planet, defined by:

  • Inclination (θ): The angle with respect to the planetary axis (that connects the South Pole to the North Pole), similar to the Earth's latitude but measured from the axis instead of from the equator. It is measured in radians within the range (0, π).
  • Azimuth (φ): The angle around the globe with respect to a specific zero meridian, similar to the Earth's longitude, although the reference meridian is obviously not Greenwich because Greenwich is a city on Earth. It is measured in radians within the range (−π, π].

The following image graphically illustrates the representation of the planetary station:

The coordinate system of the station is defined by the longitude tangent direction as the i vector (first axis), the latitude tangent direction as the j vector (second axis) and the surface normal as the k vector (third axis). All three directions are linearly independent (perpendicular). This coordinate system can be expressed from the global (UCS) point of view.

When asked for a planetary station, the user will first introduce the planet (the three vectors) followed by the inclination and azimuth of the station. The system then computes the position of the station and the corresponding coordinate system.
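The station position follows from standard spherical coordinates. A minimal sketch, assuming for simplicity that the planetary axis is aligned with the local z axis and the zero meridian with the local x axis (the real implementation would handle an arbitrary axis via a base change):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// Inclination theta is measured from the axis (a colatitude), azimuth phi
// around the axis from the zero meridian.
Vec3 stationPosition(const Vec3& center, double radius,
                     double theta, double phi) {
    return {center.x + radius * std::sin(theta) * std::cos(phi),
            center.y + radius * std::sin(theta) * std::sin(phi),
            center.z + radius * std::cos(theta)};
}
```

With θ = π/2 and φ = 0 the station sits on the equator at the zero meridian, as expected.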

Interplanetary connection

Given two stations, the connection in the Universe Coordinate System is the direction from the launching station's position to the receiving station's position. However, for the transport to work properly, each station needs the connection in its specific coordinate system, as seen at ground level on each planet. The following images represent the connection between two stations: the first includes the local coordinate systems of both stations, represented as three colored vectors per station, and the second shows the connection as seen from the local coordinate system of a single station.
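Since the station's basis (i, j, k) is orthonormal, the base change from UCS to local coordinates reduces to three dot products. A hedged sketch (names are illustrative):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 sub(const Vec3& a, const Vec3& b) {
    return {a.x - b.x, a.y - b.y, a.z - b.z};
}

double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// UCS connection: direction from the launching station to the receiving one.
Vec3 connection(const Vec3& launcher, const Vec3& receiver) {
    return sub(receiver, launcher);
}

// Components of a UCS direction d in the local orthonormal frame (i, j, k).
Vec3 toLocal(const Vec3& d, const Vec3& i, const Vec3& j, const Vec3& k) {
    return {dot(d, i), dot(d, j), dot(d, k)};
}
```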

There is a downside, though. The quantum catapult requires between 1 and 2 seconds to reach its peak speed of 20c. At that peak speed, the transported matter is completely out of phase at its very own specific frequency and can never collide with other matter, such as existing planets or stars, or even other transported out-of-phase matter. However, before that happens, the matter has a very high risk of in-phase collision. This is especially dangerous when the quantum catapult launcher is pointing towards the inside of its own planet, so as a safety mechanism the launcher will never work if pointing inwards.

The same happens on the other end: the quantum catapult receiver requires between 1 and 2 seconds to dampen the speed, and in those moments the transported matter may collide. Therefore the quantum receiver will never work if the matter reaches the station from within its own planet. The full prototype asks for two stations and then prints on screen the connection between them in the two local coordinate systems of the two stations. It also gives a warning if the trajectory of the matter traverses either of the two planets.
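One natural way to express both safety checks is via the sign of a dot product against each station's outward surface normal; this is a hypothetical sketch of that idea, not necessarily how the prototype implements it:

```cpp
#include <cassert>

struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// d: connection direction from launcher to receiver, in UCS.
// The launcher must point away from its planet (above its local horizon).
bool safeLaunch(const Vec3& d, const Vec3& launcherNormal) {
    return dot(d, launcherNormal) > 0.0;
}

// The matter must arrive from outside the receiver's planet,
// i.e. the connection points against the receiver's outward normal.
bool safeReception(const Vec3& d, const Vec3& receiverNormal) {
    return dot(d, receiverNormal) < 0.0;
}
```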

Implementation, compilation and execution

The implementation has been developed in the Visual Studio Code source code editor. The program has been compiled and executed both on a native Debian operating system and in a VirtualBox virtual machine, both running Debian 10.

Tests

In order to verify the correct functioning of the program, test programs have been written periodically. Thus, for each module of the program there is a repertoire of tests that verify its operation. The tests use cassert, which defines the assert macro, a standard debugging tool.
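A per-module test in this style might look like the following (a hypothetical example of the pattern, with an illustrative `modulus3` function standing in for a real module operation):

```cpp
#include <cassert>
#include <cmath>

// Illustrative module operation under test.
double modulus3(double x, double y, double z) {
    return std::sqrt(x * x + y * y + z * z);
}

// One test function per property, using cassert's assert macro:
// a failing assertion aborts the program and reports file and line.
void testModulus() {
    assert(std::abs(modulus3(3.0, 4.0, 0.0) - 5.0) < 1e-6);
    assert(std::abs(modulus3(0.0, 0.0, 0.0)) < 1e-6);
}
```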

Ray Tracing

Goal

The goals of this assignment are:

  • Develop the key structures and functions for rendering algorithms.
  • Link mathematical concepts with practical code.
  • Be a test benchmark for potential new primitives, sensors or acceleration structures.

Sensor

Rays are cast from a sensor following a camera model; the model used is a pinhole camera (see Figure 1). Bear in mind that the size of the projection plane (given by the up and left vectors) must be proportional to the resolution of the image (in pixels); otherwise the geometry appears distorted according to the aspect ratio. The boundaries of each pixel (related to the projection plane's size and the image resolution) are calculated in order to cast multiple rays per pixel. The number of rays per pixel is a command line parameter.
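Ray generation for such a pinhole camera can be sketched as follows, assuming (hypothetically) a camera at the origin with orthogonal `left`, `up` and `forward` vectors and a projection plane centered at origin + forward:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 add(const Vec3& a, const Vec3& b) {
    return {a.x + b.x, a.y + b.y, a.z + b.z};
}
Vec3 scale(const Vec3& v, double s) {
    return {v.x * s, v.y * s, v.z * s};
}

// Direction (not normalized) of the ray through the center of pixel
// (px, py) for a width x height image.
Vec3 pixelRay(const Vec3& left, const Vec3& up, const Vec3& forward,
              int px, int py, int width, int height) {
    // Map the pixel center to [-1, 1] on each axis of the plane:
    // +1 is the left edge / top edge.
    double u = 1.0 - 2.0 * (px + 0.5) / width;
    double v = 1.0 - 2.0 * (py + 0.5) / height;
    return add(forward, add(scale(left, u), scale(up, v)));
}
```

Casting multiple rays per pixel amounts to sampling several (u, v) offsets inside each pixel's boundaries instead of only its center.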

Geometrical primitives

The geometry consists of the following primitives:

  • Sphere: given by its center and radius.
  • Plane: an infinite plane given by its normal and its distance to the origin.

Each primitive can be intersected with a ray: this results in a system of equations in which the ray contributes the equation of a line and the specific geometry contributes its implicit equation. The system is solved by substitution to obtain the parameter of the ray. In addition to its geometrical information, each primitive has an emission property, a red-green-blue tuple, which represents the color of the primitive.
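For the sphere, substituting the ray p(t) = o + t·d into the implicit equation |p − c|² = r² yields a quadratic in t whose smallest non-negative root is the closest hit. A minimal sketch of that substitution (names are illustrative):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 sub(const Vec3& a, const Vec3& b) {
    return {a.x - b.x, a.y - b.y, a.z - b.z};
}
double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Returns the ray parameter t of the closest intersection, or -1 on a miss.
double intersectSphere(const Vec3& o, const Vec3& d,
                       const Vec3& center, double radius) {
    Vec3 oc = sub(o, center);
    double a = dot(d, d);
    double b = 2.0 * dot(oc, d);
    double c = dot(oc, oc) - radius * radius;
    double disc = b * b - 4.0 * a * c;
    if (disc < 0.0) return -1.0;  // no real roots: the ray misses
    double t = (-b - std::sqrt(disc)) / (2.0 * a);
    if (t < 0.0) t = (-b + std::sqrt(disc)) / (2.0 * a);
    return t >= 0.0 ? t : -1.0;
}
```

The plane case is analogous but linear: substituting the ray into n·p = dist gives a single value of t.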

Rendering

The application is a renderer that casts a single ray per pixel and writes to the corresponding pixel coordinate the emission of the closest intersected primitive. The scenes are hardcoded in the source code. The image is saved for later visualization. Some tests have been done to verify the effect of the number of objects and the image resolution on the rendering time.
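Saving the result can be done in the plain-text PPM (P3) format, which needs only a small header followed by one RGB triple per pixel. A minimal sketch (the function name and in-memory layout are illustrative assumptions):

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Serialize a row-major buffer of interleaved 0-255 RGB values as a
// plain-text PPM (P3) image: header "P3", dimensions, maxval, then pixels.
std::string toPPM(const std::vector<int>& rgb, int width, int height) {
    std::ostringstream out;
    out << "P3\n" << width << " " << height << "\n255\n";
    for (std::size_t i = 0; i + 2 < rgb.size(); i += 3)
        out << rgb[i] << " " << rgb[i + 1] << " " << rgb[i + 2] << "\n";
    return out.str();
}
```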

These images are examples of test renders used to verify how the ray tracer works:

Documentation

The documentation of the code has been carried out using a format similar to Javadoc.

Documentation about cassert tool for C++: http://arco.esi.uclm.es/~david.villa/pensar_en_C++/vol1/ch03s09s03.html

Using a Makefile for compiling your program: https://www.ivofilot.nl/posts/view/19/Using+a+Makefile+for+compiling+your+program

Details of the PPM image format: http://netpbm.sourceforge.net/doc/ppm.html

Conversion from RGB to Lab and vice versa: https://stackoverflow.com/questions/7880264/convert-lab-color-to-rgb
