Homework 2 Blog


GTA VI

Greetings, and welcome to my blog! I will start with my design choices and show some results. After that, I will walk you through an experiment with instancing.



Design Choices:

Setup: 

I have created a mesh class that loads a mesh from its OBJ file and stores the properties needed for rendering (VAO, number of triangles, etc.). At the very beginning of the program, I load the whole scene into these mesh objects. For textures, I use the stb_image library to load the images from disk; each texture is then referenced by its OpenGL texture ID.
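
For a rough idea of what the texture loading looks like, here is a minimal sketch using stb_image (this is not my exact code; loadTexture is a made-up name and the format handling is simplified):

 // Loads an image from disk with stb_image and uploads it as an OpenGL 2D texture.
 // Returns the texture ID that the rest of the renderer uses to reference it.
 GLuint loadTexture(const char* path)
 {
     int width, height, channels;
     unsigned char* data = stbi_load(path, &width, &height, &channels, 0);
     if (!data)
         return 0;                                    // loading failed

     GLenum format = (channels == 4) ? GL_RGBA : GL_RGB;

     GLuint texID;
     glGenTextures(1, &texID);
     glBindTexture(GL_TEXTURE_2D, texID);
     glTexImage2D(GL_TEXTURE_2D, 0, format, width, height, 0, format, GL_UNSIGNED_BYTE, data);
     glGenerateMipmap(GL_TEXTURE_2D);
     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

     stbi_image_free(data);
     return texID;
 }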


Rendering:

The scene is rendered in two passes. The first pass renders the scene into a dynamic cubemap, which is later used by the car mesh in the color pass.

The First Pass (Reflection Pass):

In the first pass, I place the camera at the center of the car mesh (basically the car position), then render the scene without the car six times, once into each face of the cubemap.
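
In code, this pass looks roughly like the sketch below (assuming GLM, a framebuffer object reflectionFBO with a depth attachment, a cubemap texture envCubemap of size CUBEMAP_SIZE, and a helper renderScene(view, proj, skipCar); these names are placeholders, not my actual identifiers):

 // Six view directions and up vectors, one per cubemap face (+X, -X, +Y, -Y, +Z, -Z).
 const glm::vec3 dirs[6] = { { 1,0,0}, {-1,0,0}, {0, 1,0}, {0,-1,0}, {0,0, 1}, {0,0,-1} };
 const glm::vec3 ups[6]  = { {0,-1,0}, {0,-1,0}, {0,0, 1}, {0,0,-1}, {0,-1,0}, {0,-1,0} };

 glm::mat4 proj = glm::perspective(glm::radians(90.0f), 1.0f, 0.1f, 100.0f); // 90-degree FOV covers one face
 glBindFramebuffer(GL_FRAMEBUFFER, reflectionFBO);
 glViewport(0, 0, CUBEMAP_SIZE, CUBEMAP_SIZE);

 for (int face = 0; face < 6; ++face) {
     // Attach the current cubemap face as the color target.
     glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                            GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, envCubemap, 0);
     glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

     // The camera sits at the car's position and looks along the face direction.
     glm::mat4 view = glm::lookAt(carPosition, carPosition + dirs[face], ups[face]);
     renderScene(view, proj, /*skipCar=*/true);
 }
 glBindFramebuffer(GL_FRAMEBUFFER, 0);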

An Issue with Retina Displays on macOS:

I develop on a Mac, which has a Retina display. The thing about Retina displays is that the actual framebuffer size is double the window size reported in screen coordinates. In the reflection pass, the viewport is changed to the size of the cubemap, so it must be reset before the color pass. While resetting it, I tried to use the screen size, but the rendered scene ended up covering only half of the screen. After a professional research process, I found out this reality of the Retina display. The nice thing is that the framebuffer size can be queried directly with GLFW.


So, the process is as follows:

 int width, height;
 glfwGetFramebufferSize(window, &width, &height);
 glViewport(0, 0, width, height);


The Second Pass (Color Pass):

There is nothing fancy here; I just render the scene as usual. For the car mesh, I bind the cubemap rendered in the first pass and use it for the reflections. I use Phong shading for all meshes except the ground (there is no lighting for the ground). At the very end, I render the skybox using the trick described in: https://learnopengl.com/Advanced-OpenGL/Cubemaps
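
The reflection itself boils down to reflecting the view direction around the surface normal and sampling the cubemap with the result. A minimal fragment-shader sketch of that part (the uniform and variable names are placeholders, and the blend with Phong is just an example):

 uniform samplerCube environmentMap;   // cubemap rendered in the reflection pass
 uniform vec3 cameraPos;

 in vec3 fragPos;                      // world-space position
 in vec3 fragNormal;                   // world-space normal

 vec3 reflectionColor()
 {
     vec3 I = normalize(fragPos - cameraPos);     // incident (view) direction
     vec3 R = reflect(I, normalize(fragNormal));  // mirrored direction
     return texture(environmentMap, R).rgb;       // sample the dynamic cubemap
 }

 // ...later, blended with the Phong result, e.g.:
 // vec3 color = mix(phongColor, reflectionColor(), reflectivity);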


Some Results:






Experiment: Instanced Rendering

I am too lazy to place the meshes by hand, but we need more meshes to show the reflections to their full extent. It would be very nice if we could place the meshes randomly and still draw them with a single draw call. That's where instanced rendering comes into play.


For an introduction to instancing, see: https://learnopengl.com/Advanced-OpenGL/Instancing


We can basically draw the same mesh multiple times (each copy is called an instance) with a single draw call. Since everything is handled in one call, instead of a separate draw call for each mesh, this is very efficient.
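
On the CPU side, this only changes the draw call itself. A minimal sketch (assuming the mesh's VAO is already bound to the right buffers, and that mesh.indexCount and instanceCount exist; both are placeholder names):

 // Draw `instanceCount` copies of the mesh in a single call.
 // The vertex shader tells the copies apart via gl_InstanceID.
 glBindVertexArray(mesh.vao);
 glDrawElementsInstanced(GL_TRIANGLES, mesh.indexCount, GL_UNSIGNED_INT, 0, instanceCount);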


Discussion: Transformations for Instances

The idea behind instancing is to send the data from the CPU to the GPU once and render every instance in a single draw call. This means each instance sees the same vertex layout and uniforms. Therefore, there is a built-in variable called gl_InstanceID (the index of the current instance) to differentiate the properties of the instances. In the link above, you can see how a transformation can be set for each instance. However, storing a transformation for every instance is wasteful and inefficient. It gets even worse if the data changes over time. Imagine rendering snow as instances; in each frame the flakes need to move, which means updating the per-instance data on the GPU every time. That would be too slow for a real-time rendering task. Instead, we can leverage the power of math and randomness: using gl_InstanceID as a seed, we can use hash functions to generate pseudo-random, yet deterministic, values directly in the shader.


You can find the hash functions I used at this link: https://www.shadertoy.com/view/4djSRW
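
To give a flavor of the approach, here is a minimal vertex-shader sketch (not my exact shader; hash31 is one of the "hash without sine" functions from the Shadertoy link above, and the offset range is arbitrary):

 #version 330 core
 layout (location = 0) in vec3 aPos;

 uniform mat4 viewProj;

 // Hash without sine: 1 float in, 3 pseudo-random floats out.
 vec3 hash31(float p)
 {
     vec3 p3 = fract(vec3(p) * vec3(0.1031, 0.1030, 0.0973));
     p3 += dot(p3, p3.yzx + 33.33);
     return fract((p3.xxy + p3.yzz) * p3.zyx);
 }

 void main()
 {
     // Derive a deterministic pseudo-random offset from the instance index.
     vec3 r = hash31(float(gl_InstanceID));
     vec3 offset = (r - 0.5) * vec3(50.0, 0.0, 50.0);   // scatter over the ground plane

     gl_Position = viewProj * vec4(aPos + offset, 1.0);
 }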


Here you can see the result with instancing. 







Here is the vertex shader code used to create it:



