Computer graphics
Today, I will write about computer graphics technology. This is just an introduction to how 3D images are made from 2D images with the computer, and the opposite way (3D→2D).
Firstly, computer graphics is a sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content. As an academic discipline, computer graphics studies the manipulation of visual and geometric information using computational techniques, and it focuses on the mathematical and computational foundations of image generation and processing rather than on purely aesthetic issues. CG image types are classified into five types:
Two-dimensional
Pixel art
Vector graphics
Three-dimensional
Computer animation
Nowadays three-dimensional technology is used the most, and rendering is used to accomplish it. The principles of CG technology are also classified into five main principles.
· Pixel
· Graphics
· Rendering
· Volume Rendering
· 3D modeling
I’ll explain these one by one below.
1. Pixel
A pixel is a single point in a raster image in digital imaging. Pixels are placed on a regular two-dimensional grid and are often represented using dots or squares. The computer cannot deal with continuous values, so the image has to be divided, and each point produced by this division is defined as a pixel. For example, a 640×480-pixel picture is expressed by arranging 640 points horizontally and 480 points vertically. Because a picture in the computer cannot be displayed more precisely than one pixel unit, there are sometimes jaggies at the edges of the picture. To fix this problem, anti-aliasing is necessary; it is a rendering technique that makes these jaggies smoother, and two kinds of anti-aliasing are used the most.
In the over-sampling method, one pixel is first considered as a collection of many smaller pixels, called sub-pixels. The picture is then generated in sub-pixel units, and in the final step the result is obtained by taking the arithmetic average of the sub-pixels. Because this method is easy and simple to perform, the original algorithm does not have to change much; however, as the calculation time increases, the efficiency of the effect saturates. In the adaptive sampling method, on the other hand, the calculation time is decreased by defining a threshold value and comparing the gap between adjacent sub-pixels against it, to decide whether further calculation is needed. The comparison formula is given below (each r, g, b is the RGB component of a sub-pixel):
\mathrm{diff} = |r_1 - r_2| + |g_1 - g_2| + |b_1 - b_2| \qquad (1)
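To make the comparison concrete, here is a minimal Python sketch of the adaptive-sampling test in equation (1); the sample values and the threshold are illustrative assumptions, not values from any particular renderer.

```python
# A minimal sketch of the adaptive-sampling test from equation (1).
# The pixel tuples and the threshold value are illustrative assumptions.

def color_diff(p1, p2):
    """Sum of absolute RGB differences between two sub-pixel samples."""
    r1, g1, b1 = p1
    r2, g2, b2 = p2
    return abs(r1 - r2) + abs(g1 - g2) + abs(b1 - b2)

def needs_subdivision(samples, threshold=0.1):
    """Compare each pair of adjacent sub-pixel samples against the
    threshold; only regions that differ enough are sampled further."""
    return any(color_diff(a, b) > threshold
               for a, b in zip(samples, samples[1:]))

# Example: sub-pixel samples of one pixel (RGB in [0, 1]).
samples = [(0.2, 0.2, 0.2), (0.2, 0.2, 0.2), (0.9, 0.9, 0.9)]
print(needs_subdivision(samples))  # True: an edge crosses this pixel
```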
2. Graphics
Graphics are visual presentations on a surface, such as a computer screen; examples include photographs, drawings, graphic designs, maps, and so forth. There are two types of CG: raster graphics and vector graphics. The former defines each pixel separately, as in a digital photograph, while the latter uses mathematical formulas to describe lines and shapes, which are then interpreted at the viewer's end to produce the graphic. A toy contrast is sketched below.
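The sketch stores the same short line both ways; the data structures are assumptions made up for this example.

```python
# Raster: every pixel is stored explicitly (a 4x4 grid, 1 = drawn).
raster = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]

# Vector: only the mathematical description is stored; the viewer's
# renderer rasterizes it at whatever resolution it needs.
vector = {"type": "line", "from": (0, 1), "to": (3, 1), "width": 1}
```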
3. Rendering
Rendering is the process of generating the scene that would be projected into a hypothetical camera from the scene description prepared beforehand. In this process it is necessary to calculate each object's shape and location, and how it is exposed to the light, using some algorithm; there are many kinds of algorithms, and one should be chosen depending on the use. For example, in a scene where rendering has to be performed in real time, as in a game project, it is necessary to use an algorithm that is simple and processes data fast, or to decrease the total number of polygons in the scene. Rendering usually takes a long time; especially if a high-end rendering algorithm is used, it can take from several hours to several days. I'll explain some kinds of rendering below.
3D projection is a method of mapping three-dimensional points to a two-dimensional plane. This method is also familiar as projection mapping, which uses computer-designed CG and instruments like a projector to project images onto objects, buildings, and even into the air.
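A minimal Python sketch of the idea, assuming a pinhole camera looking down the z-axis with an assumed focal length f:

```python
# Perspective projection: a camera-space point (x, y, z) is mapped to
# the 2D image plane at distance f by dividing by depth. The focal
# length f is an assumed parameter for this example.

def project(point, f=1.0):
    x, y, z = point
    if z <= 0:
        raise ValueError("point must be in front of the camera (z > 0)")
    return (f * x / z, f * y / z)

# A point twice as far away lands half as far from the image centre.
print(project((1.0, 1.0, 2.0)))  # (0.5, 0.5)
print(project((1.0, 1.0, 4.0)))  # (0.25, 0.25)
```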
Ray tracing is a technique for generating an image by tracing the path of light through pixels in an image plane. Ray tracing is performed not only for light but also for electromagnetic waves, seismic waves, and ultrasound. The merit of this method is that it can display an object's reflections and refractions, but dealing with effects such as diffraction requires additional modeling.
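Below is a minimal Python sketch of the core ray-tracing primitive, intersecting one ray with one sphere; the scene values are illustrative assumptions, and a full tracer would also spawn reflection and refraction rays at the hit point.

```python
import math

# Intersect a ray o + t*d with a sphere |p - c|^2 = r^2 by solving
# the resulting quadratic in t.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def intersect_sphere(origin, direction, center, radius):
    """Return the nearest positive hit distance t, or None for a miss."""
    oc = [o - c for o, c in zip(origin, center)]
    a = dot(direction, direction)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                       # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2 * a)  # nearer of the two roots
    return t if t > 0 else None

# A ray down the z-axis hits a unit sphere centred 5 units away.
print(intersect_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```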
Shading refers to depicting depth in 3D models or illustrations by varying levels of darkness. In drawing, this process depicts levels of darkness on paper by applying the medium more densely, or with a darker shade, for darker areas, and less densely, or with a lighter shade, for lighter areas. In CG, shading changes the surface color of a three-dimensional model based on the angle between the surface and the light. Several kinds of light are used in shading.
Directional light is modeled on sunlight, which is emitted from very far away, so objects are exposed equally regardless of where they are placed and there is no decay with distance. Therefore the color of each vertex is calculated by considering only the assigned color and direction. Because the number of lighting coefficients is small, this is the computationally cheapest of all the light types.
Point light is emitted from a single point, after which it spreads out in all directions.
Ambient light exposes all objects in the scene equally. In particular, because it is not necessary to assign a position to the light and the brightness of all objects is equal, no shading calculation is required. The ambient light in one scene is given by

\mathrm{AmbientLighting} = C_a \left[ G_a + \sum_i \left( L_{a,i} \cdot \mathrm{Atten}_i \cdot \mathrm{Spot}_i \right) \right] \qquad (2)
Here C_a is the material's ambient color, G_a is the global ambient color, L_{a,i} is the ambient color of the i-th light, Atten_i is the attenuation of the i-th light, and Spot_i is the spotlight coefficient of the i-th light.
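A minimal Python sketch of two of the terms described above, a directional light's diffuse contribution and the ambient term of equation (2); all material and light values are illustrative assumptions.

```python
# Directional light needs only a colour and a direction (Lambert's
# N.L term); the ambient term follows equation (2).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def directional_diffuse(normal, light_dir, light_color, material_color):
    """Diffuse shading: intensity depends only on the angle between the
    surface normal and the (distance-independent) light direction."""
    n_dot_l = max(0.0, dot(normal, light_dir))
    return [m * l * n_dot_l for m, l in zip(material_color, light_color)]

def ambient(ca, ga, lights):
    """Equation (2): Ca * [Ga + sum(La_i * Atten_i * Spot_i)].
    Each light is a tuple (La_i, Atten_i, Spot_i); colours are per-channel."""
    total = list(ga)
    for la, atten, spot in lights:
        total = [t + l * atten * spot for t, l in zip(total, la)]
    return [c * t for c, t in zip(ca, total)]

print(directional_diffuse((0, 0, 1), (0, 0, 1), [1, 1, 1], [0.8, 0.2, 0.2]))
print(ambient([0.5, 0.5, 0.5], [0.1, 0.1, 0.1], [([0.4, 0.4, 0.4], 1.0, 1.0)]))
```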
Distance falloff must be considered when shading is used. Distance falloff occurs because the light's power decreases with the distance from the light. It is calculated in the following ways (a sketch in code follows the list).
[Linear case]
Given that the distance from the light is x, the amount of light decreases in proportion to 1/x: doubling the distance halves the light.
[Quadratic case]
When the distance from the light doubles, the amount of light becomes a quarter, i.e. it decreases in proportion to 1/x^2.
[Cubic or higher (n-th degree) case]
Given that the distance from the light is x, the amount of light becomes 1/x^n.
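A minimal Python sketch of the three falloff laws, assuming the linear case means the light decreases in proportion to 1/x, consistent with the quadratic and n-th degree cases:

```python
# x is the distance from the light; the exponent n for the general
# case is an assumed parameter.

def falloff_linear(x):
    return 1.0 / x          # halves when the distance doubles

def falloff_quadratic(x):
    return 1.0 / (x * x)    # quarters when the distance doubles

def falloff_nth(x, n):
    return 1.0 / (x ** n)   # general n-th degree falloff

for x in (1.0, 2.0, 4.0):
    print(x, falloff_linear(x), falloff_quadratic(x), falloff_nth(x, 3))
```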
Texture mapping
Texture mapping is a method for adding detail, surface texture, or color to a computer-generated graphic or 3D model. Texture mapping is the electronic equivalent of applying wallpaper, paint, or veneer to a real object. The simplest texture mappings involve processes such as the following: three identical squares, each covered randomly with dots, are directly mapped onto the three visible facets of a 3D cube. This distorts the sizes and shapes of the dots on the top and right-hand facets. In such a mapping, the texture map can cover the cube with no apparent discontinuities because of the way the dots are arranged on the squares.
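The core operation behind this is a texture lookup: a surface point carries (u, v) coordinates in [0, 1] that index into the 2D texture image. A minimal Python sketch, with a made-up 2×2 checker texture as the assumed image:

```python
# Nearest-neighbour texture sampling; the tiny texture is an
# illustrative assumption.

texture = [
    [(255, 255, 255), (0, 0, 0)],   # a tiny black-and-white checker
    [(0, 0, 0), (255, 255, 255)],
]

def sample(texture, u, v):
    """Map (u, v) in [0, 1] to a texel index and return its colour."""
    h, w = len(texture), len(texture[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return texture[y][x]

# Each cube facet assigns (u, v) per vertex; interpolating them across
# the facet and sampling produces the wallpaper-like effect.
print(sample(texture, 0.1, 0.1))  # (255, 255, 255)
print(sample(texture, 0.9, 0.1))  # (0, 0, 0)
```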
In some mappings, the correspondence between the 2D texture map and the 3D object's surface breaks down; the classic example is the surface of a sphere. It is impossible to paste checkered wallpaper onto a sphere without cutting the paper in a way that creates discontinuities in the pattern. This problem occurs with many texture mappings.
However, a complex pattern can, in some cases, be seamlessly wedded to the surface of a 3D object using a sophisticated graphics program, by generating the pattern directly on the 3D rendition rather than using a texture map. For example, a sphere can be given a wood-grain finish. The squares-on-a-sphere problem cannot be solved, but it is possible to fit a pattern of triangles onto a sphere by adjusting the sizes of the triangles.
Anti-aliasing
Super sampling. The picture is first rendered at a higher resolution than the display resolution; this high-resolution picture is then converted down to the previously designated resolution, and the converted picture is displayed on the monitor.
Multi sampling. As in super sampling, the picture is rendered at a higher resolution, but the pixel values are computed only at the display resolution; each value is then copied in proportion to the higher resolution and recorded.
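A minimal Python sketch of the super-sampling pipeline, averaging each k×k block of the high-resolution picture down to one display pixel; the tiny greyscale image and the factor k = 2 are illustrative assumptions.

```python
# Box-filter a (k*H) x (k*W) image down to H x W by arithmetic
# averaging, as described for the over-sampling method above.

def downsample(hi_res, k):
    H, W = len(hi_res) // k, len(hi_res[0]) // k
    out = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            block = [hi_res[y * k + j][x * k + i]
                     for j in range(k) for i in range(k)]
            out[y][x] = sum(block) / (k * k)
    return out

hi = [[0.0, 1.0, 1.0, 1.0],
      [0.0, 0.0, 1.0, 1.0]]   # a jagged diagonal edge
print(downsample(hi, 2))      # [[0.25, 1.0]] -- the edge is softened
```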
4. Volume Rendering
Volume rendering is a technique used to display a 2D projection of a 3D discretely sampled data set. This technique can express more varied images than other techniques, but its high calculation cost makes it unsuitable when real-time performance is required. Because it enables us to confirm the whole image of the volume data, it is often used to display medical images. In Figure 1, when the distance along the ray from the viewpoint is defined as λ and the location in the volume is defined as x(λ), the pixel intensity I is
I = \int_0^{D} c(x(\lambda)) \, e^{-\int_0^{\lambda} \alpha(x(\lambda')) \, d\lambda'} \, d\lambda \qquad (3)
This is called the raycasting method, which provides comparatively the best picture quality. In this method each point's CT value s(x) gives RGB values and an opacity α to each voxel. By increasing the opacity α of selected voxels, an object can be rendered semi-transparently and also emphasized more.
The function c expresses the RGB amounts emitted from each point, and the function α expresses the opacity of each point. Through these functions the display color is assigned to the CT value s(x) given as the volume data. Usually c(x) and α are defined as functions of s(x), but the actual calculation involves the following two steps.
First, a transfer function maps the CT value s(x) to RGB values and α, so that each point's contribution to the color depends on its CT value. This corresponds to the effect of light emitted from each volume point and is called classification. The transfer function is defined by the user; it does not depend on the viewpoint and is constant, although it may be changed during the rendering process.
Next, the color information determined by the spatial relationship between each point, the light direction, and the position of the light source is set; this is called shading. It includes the diffuse-reflection and specular-reflection components of the light. The effects of classification and shading can therefore be combined by calculating each value separately. When the distribution of CT values is visualized, classification alone can describe the image; on the other hand, when the geometrical shape of the object is displayed, shading is required to recognize the three-dimensional structure correctly.
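A minimal Python sketch of the raycasting loop that approximates equation (3) by compositing classified samples front to back along one ray; the transfer function and the CT samples are illustrative assumptions.

```python
# Classification maps a CT value s to (colour, opacity); the running
# transmittance plays the role of e^(-integral of alpha) in eq. (3).

def transfer(s):
    """Classification: map a CT value s to (colour, opacity)."""
    if s > 0.5:
        return (1.0, 1.0, 1.0), 0.6   # dense tissue: bright, fairly opaque
    return (0.2, 0.1, 0.1), 0.05      # soft tissue: dim, mostly transparent

def cast_ray(ct_samples):
    color = [0.0, 0.0, 0.0]
    transmittance = 1.0               # light surviving so far
    for s in ct_samples:              # front (eye) to back
        c, alpha = transfer(s)
        for i in range(3):
            color[i] += transmittance * alpha * c[i]
        transmittance *= (1.0 - alpha)
        if transmittance < 0.01:      # early termination: ray is opaque
            break
    return color

print(cast_ray([0.1, 0.2, 0.8, 0.9, 0.3]))
```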
5. 3D modeling
3D modeling is the process of developing a mathematical, wireframe representation of any three-dimensional object (a 3D model) using specialized software. This 3D model can be displayed as a two-dimensional image through a process like the 3D rendering I mentioned already. 3D models may be created using multiple approaches, such as NURBS curves to generate accurate and smooth surface patches, polygonal mesh modeling, or polygonal mesh subdivision. Using Bezier curves, NURBS curves, and NURBS surfaces defines spline modeling; on the other hand, using only polygons with three or more vertices defines polygonal modeling. The former suits scenes where extremely precise curves are required, such as manufacturing industrial products, while the latter is better for real-time applications and organic shapes like humans and animals.
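As a small taste of spline modeling, here is a minimal Python sketch that evaluates a cubic Bezier curve with de Casteljau's algorithm (NURBS curves generalize this with weights and knots); the control points are made up for the example.

```python
# De Casteljau: repeated linear interpolation between control points.

def lerp(p, q, t):
    return tuple(a + (b - a) * t for a, b in zip(p, q))

def bezier(points, t):
    """Collapse the control polygon one level at a time until one
    point, the curve point at parameter t, remains."""
    pts = list(points)
    while len(pts) > 1:
        pts = [lerp(pts[i], pts[i + 1], t) for i in range(len(pts) - 1)]
    return pts[0]

control = [(0, 0), (1, 2), (3, 2), (4, 0)]
for t in (0.0, 0.5, 1.0):
    print(t, bezier(control, t))   # a smooth curve through the end points
```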