HW1 - Bezier Surfaces
Greetings and welcome to my blog! I will start with the design choices I made and show some results. After that, I will discuss a few topics about the implementation.
Design Choices:
Setup Flow:
The program I have written for this homework is fairly simple. For programs like this, I generally do not use Object-Oriented Design, which I am still not sure I am a big fan of, so I rely extensively on global variables and structs. A Bezier surface is represented by a struct that contains all the necessary geometric and OpenGL data. The geometric data consists of the UV values required to compute the Bernstein polynomials in the shader, the control points, triangulation information, and so on. These fields are filled while the input file is being processed. After all the necessary data has been gathered, triangulation, OpenGL buffer creation, and the CPU-to-GPU transfer are performed.
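To make the triangulation step concrete, here is a minimal CPU-side sketch of how the per-vertex UV grid and the triangle index list for one patch might be generated. The struct and function names are my own, not the original code's, and the layout (interleaved u,v pairs, two triangles per grid cell) is an assumption.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical sketch: generate the UV samples and triangle indices for an
// s x s sampling of a single Bezier patch.
struct PatchMesh {
    std::vector<float>    uvs;     // interleaved u,v pairs
    std::vector<uint32_t> indices; // two triangles per grid cell
};

PatchMesh buildPatchMesh(int s) {  // s = samples per edge (s >= 2)
    PatchMesh m;
    for (int i = 0; i < s; ++i) {
        for (int j = 0; j < s; ++j) {
            m.uvs.push_back(float(j) / float(s - 1)); // u
            m.uvs.push_back(float(i) / float(s - 1)); // v
        }
    }
    for (int i = 0; i < s - 1; ++i) {
        for (int j = 0; j < s - 1; ++j) {
            uint32_t a = i * s + j, b = a + 1, c = a + s, d = c + 1;
            m.indices.insert(m.indices.end(), {a, b, c, b, d, c});
        }
    }
    return m;
}
```

The resulting `uvs` array would go into a vertex buffer and `indices` into an element buffer; when the sample size changes on user input, both are simply regenerated and re-uploaded.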
Data in The Shader Side:
Through the vertex layout locations, I pass only the UV coordinates of the vertices. The rest of the data, such as the control points and the light positions and intensities, is passed as uniforms.
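As an illustration of the uniform side, here is a hedged sketch of flattening a 4x4 grid of control points into the contiguous float array that a single `glUniform3fv(location, 16, data.data())` call expects. The `Vec3` type and function name are assumptions, not the original code.

```cpp
#include <array>
#include <vector>

// Hypothetical sketch: pack 16 control points into the tightly packed
// float array format expected by glUniform3fv for a vec3[16] uniform.
struct Vec3 { float x, y, z; };

std::vector<float> flattenControlPoints(const std::array<Vec3, 16>& cps) {
    std::vector<float> data;
    data.reserve(48);                 // 16 points * 3 components
    for (const Vec3& p : cps) {
        data.push_back(p.x);
        data.push_back(p.y);
        data.push_back(p.z);
    }
    return data;
}
```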
Assembling the Bezier Surfaces:
It is given that each 4x4 set of control points forms a Bezier surface, and all of the given Bezier surfaces together form one assembled surface. This process is similar to tiling a given boundary with tiny building blocks, with the tiling rules given in the homework text. At this point, I had two choices: create one big surface object with its own OpenGL buffers and draw it as a whole, or draw the small Bezier surfaces independently with the correct transformations. The first option requires packing all of the required data into the GPU with the correct offsets and indexing it in the shader with those same offsets. Since the sample size is known, the size of each data field is also known, and by exploiting this property each subsurface could be resolved in the shader stage. As you can see, this is a lot of work. Although packing up all the data, sending it once, and handling everything with a single draw call is appealing, the resulting structure is convoluted and cumbersome. If I were writing a program with high-performance requirements, I might consider this approach, but for this homework I went with the simpler one: I render each Bezier surface independently with the correct transformations. The transformations are set while the surface structures are being set up. After preparing the required translations and scalings, the transformation matrices can easily be composed with the rotation determined by the user.
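A per-patch transform of this kind could be built as below. This is a hedged sketch under my own assumptions (a uniform scale plus a translation to a (row, col) cell in the tiling); the real placement rules come from the homework text. The matrix is column-major, matching what `glUniformMatrix4fv` expects with `transpose = GL_FALSE`.

```cpp
#include <array>

// Hypothetical sketch: a column-major 4x4 model matrix that scales a unit
// patch and translates it to its cell in the assembled surface.
using Mat4 = std::array<float, 16>;

Mat4 patchTransform(int row, int col, float patchSize) {
    Mat4 m{};                        // zero-initialized
    m[0] = m[5] = m[10] = patchSize; // uniform scale on x, y, z
    m[15] = 1.0f;
    m[12] = col * patchSize;         // translation x (fourth column)
    m[13] = row * patchSize;         // translation y
    return m;
}
```

A user-controlled rotation matrix would then be multiplied with this per-patch matrix before upload.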
Some Results:
My program works as expected. Though I encountered a few problems, they were minor and easy to fix. You can see some of the results below.
Discussions:
CPU-Side vs GPU-Side Surface Computing:
As stated in the homework text, the Bezier surface computation is done in the vertex shader stage. While computing the surfaces on the GPU in parallel is appealing, there is one drawback. A Bezier surface is determined solely by its control points and the sample sizes. In our case, the control points never change, and the sample sizes change only when user input occurs, so we would only need to recompute the Bezier surface in such an event. However, in every render pass the vertices go through the vertex shader stage, which effectively means recomputing the Bezier surface every single frame even when nothing has changed. This is inefficient. Instead, we could evaluate the Bezier surface vertices once, send them directly to a layout location, and reuse them in each render pass without recomputation. That way, the surface vertices would only be recomputed on user input. On the other hand, this drawback of computing the Bezier surface in the vertex shader can be exploited to make the surface dynamic: since every vertex passes through the vertex shader anyway, we can parameterize the surface with time and play with the computation in the shader to achieve interesting effects.
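The CPU-side alternative described above amounts to evaluating the cubic Bernstein basis and the patch point once per (u, v) sample and uploading the results as vertex positions. A minimal sketch of that evaluation (names and layout are my own; the control points are stored row-major as a flat array of 16):

```cpp
#include <array>
#include <cmath>

struct Vec3 { float x, y, z; };

// Cubic Bernstein basis B_i^3(t) for i = 0..3.
float bernstein3(int i, float t) {
    const float s = 1.0f - t;
    switch (i) {
        case 0: return s * s * s;
        case 1: return 3.0f * t * s * s;
        case 2: return 3.0f * t * t * s;
        default: return t * t * t;
    }
}

// Evaluate one surface point: P(u,v) = sum_ij B_i(u) * B_j(v) * C_ij.
Vec3 evalBezier(const std::array<Vec3, 16>& c, float u, float v) {
    Vec3 p{0.0f, 0.0f, 0.0f};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            const float w = bernstein3(i, u) * bernstein3(j, v);
            const Vec3& q = c[i * 4 + j];
            p.x += w * q.x; p.y += w * q.y; p.z += w * q.z;
        }
    return p;
}
```

Running this over the UV grid only when the sample size changes, and uploading the positions instead of the UVs, removes the per-frame recomputation at the cost of losing the easy time-based effects.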
The given task is not complex for modern CPUs and GPUs, so I do not think an FPS experiment would be insightful. Instead, I will demonstrate one of the capabilities mentioned above that per-frame GPU-side computation enables.
You can inspect the shader code that creates these effects:
Note that the time parameter can easily be passed as a uniform, updated with a call to glfwGetTime() in each render pass.
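The original shader is not reproduced here, but as a hedged illustration of the kind of time-parameterized displacement such a vertex shader could apply, here is the same expression written in C++ (the function name and the constants are my own; in GLSL it would use the uv attribute and a `uniform float time` and read almost identically).

```cpp
#include <cmath>

// Hypothetical sketch: a ripple displacement added to the surface height.
// A radial distance from the patch center drives a traveling sine wave.
float rippleHeight(float u, float v, float time) {
    const float amplitude = 0.1f;
    const float frequency = 8.0f;
    const float du = u - 0.5f, dv = v - 0.5f;
    const float r = std::sqrt(du * du + dv * dv);
    return amplitude * std::sin(frequency * r - 2.0f * time);
}
```

Adding this value to the evaluated surface point before the model-view-projection transform makes the surface animate smoothly as `time` advances.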


