<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Leif Node</title>
	<atom:link href="https://leifnode.com/category/programming/opengl/feed/" rel="self" type="application/rss+xml" />
	<link>https://leifnode.com</link>
	<description>Leif Erkenbrach&#039;s programming blog</description>
	<lastBuildDate>Wed, 22 Jul 2015 23:00:42 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=3.9.40</generator>
	<item>
		<title>Compute Texturing</title>
		<link>https://leifnode.com/2014/04/compute-texturing/</link>
		<comments>https://leifnode.com/2014/04/compute-texturing/#comments</comments>
		<pubDate>Tue, 22 Apr 2014 14:25:16 +0000</pubDate>
		<dc:creator><![CDATA[Leif Erkenbrach]]></dc:creator>
				<category><![CDATA[OpenGL]]></category>
		<category><![CDATA[Programming]]></category>

		<guid isPermaLink="false">http://leifnode.com/?p=210</guid>
		<description><![CDATA[Originally I was computing the height of terrain per frame which was pretty wasteful since if I wanted more complex noise functions I would not be able to maintain a runnable frame rate. I moved the computation of terrain height to a compute shader which stores the height in a texture. I was also able to compute and store normals in …<p> <a class="continue-reading-link" href="https://leifnode.com/2014/04/compute-texturing/">Continue reading<i class="icon-right-dir"></i></a></p>]]></description>
				<content:encoded><![CDATA[<p><a href="http://leifnode.com/wp-content/uploads/2014/04/Planet_Diffuse-map.jpg"><img class="alignnone size-large wp-image-214" src="http://leifnode.com/wp-content/uploads/2014/04/Planet_Diffuse-map-1024x576.jpg" alt="Planet Diffuse Map" width="920" height="517" /></a></p>
<p>Originally I was computing the height of the terrain every frame, which was pretty wasteful: with more complex noise functions I would not have been able to maintain a runnable frame rate. I moved the terrain height computation into a compute shader that stores the height in a texture. I was also able to compute normals and store them in the rgb channels of the texture.</p>
<p>In order to create a unique texture for each node, I create a cache that holds every texture a node may use in a map keyed by an <em>unsigned long long</em> that gives each texture a unique ID.</p>
<p>As the tree is traversed downwards, the <em>nextId</em> is passed to the next recursive call and used as its <em>currentId</em>. The QUADRANT_ID is an integer from 0 to 3 that identifies the quadrant being split. The ID is extended by bit shifting, using 3 bits per LOD level: the first two bits describe the quadrant, and the last bit gives each node a unique ID even when it has child nodes.</p>
<p>At some point I will probably switch to holding a persistent tree structure and splitting/joining nodes every frame, which would remove the need for unique IDs; at the moment the maximum number of levels of detail that can be stored is 21, because a <em>long long</em> only has 64 bits to work with.</p>
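<p>As a concrete illustration, the ID packing described above might look like this on the CPU. This is my reading of the scheme, not the engine&#8217;s actual code: <em>childId</em> and the exact bit layout are illustrative assumptions.</p>

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch of the node-ID scheme: each quadtree level appends
// 3 bits -- two bits for the quadrant (0-3) plus one marker bit so a
// node's ID can never collide with any of its children's IDs.
uint64_t childId(uint64_t currentId, unsigned quadrant)
{
    return (currentId << 3) | (uint64_t(quadrant) << 1) | 1u;
}
// e.g. starting from a root ID of 1:
//   childId(1, 2)  -> 13   (quadrant 2 of the root)
//   childId(13, 0) -> 105  (quadrant 0 one level deeper)
```

<p>With 3 bits consumed per level, a 64-bit ID runs out after 21 levels, which matches the limit mentioned above.</p>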
<p><a href="http://leifnode.com/wp-content/uploads/2014/04/Planet_Height-map.jpg"><img class="alignnone wp-image-215 size-large" src="http://leifnode.com/wp-content/uploads/2014/04/Planet_Height-map-1024x576.jpg" alt="Planet_Height map" width="920" height="517" /></a></p>
<p>The terrain height map is generated using the same fractional Brownian motion function that I described in my earlier <a title="Procedural Fractal Terrain" href="http://leifnode.com/2014/04/procedural-fractal-terrain/">post</a>. The calculated height is stored in a 2D shared float array in the compute shader, sized to match the work group. Prior to the normal calculation, the height value is stored in the shared float array named <em>workGroupHeight</em>:</p>
<pre class="brush: cpp; title: ; notranslate">
workGroupHeight[gl_LocalInvocationID.x][gl_LocalInvocationID.y] = heightValue;
</pre>
<p><a href="http://leifnode.com/wp-content/uploads/2014/04/Planet_Normal-map.jpg"><img class="alignnone size-large wp-image-216" src="http://leifnode.com/wp-content/uploads/2014/04/Planet_Normal-map-1024x576.jpg" alt="Planet_Normal map" width="920" height="517" /></a></p>
<p>The normal calculation is done in a second pass of the same compute shader that generates the height map. Once the height values are calculated and saved to the temporary 2D array of floats, I call the GLSL function <em>barrier()</em>. Since GPUs are highly parallel, each pixel of the height map is processed by its own thread, so some threads can finish sooner than others. This is a problem because the normal computation needs the neighboring pixels&#8217; heights: if one thread reads a neighboring value before the thread responsible for it has written it, you get patches of pixels with incorrect normals. Calling <em>barrier()</em> makes every thread wait at that point until all of the others have reached it, after which execution continues in parallel. Early on I was not calling the correct function and instead used one of the memory barriers, which produced a bar-code pattern alternating between normals facing the center of the planet and correctly computed ones.</p>
<p>I calculate each normal by taking the cross product of the two vectors that run from the point the shader is currently working on to its neighbors at (x+1, y) and (x, y+1) on the texture. Once a normal has been calculated, I save the final normal and height values to an <em>rgba32f</em> texture using the GLSL function <em>imageStore()</em>.</p>
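<p>Translated to the CPU, the cross-product step looks roughly like this. This is a sketch assuming a grid spacing of 1; <em>heightmapNormal</em> is a hypothetical name standing in for the shader code.</p>

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 cross(Vec3 a, Vec3 b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

static Vec3 normalize(Vec3 v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Build the two edge vectors from the current texel to its (x+1, y) and
// (x, y+1) neighbours and cross them; h, hRight, and hUp stand for the
// heights read from the shared workGroupHeight array.
Vec3 heightmapNormal(float h, float hRight, float hUp)
{
    Vec3 toRight = { 1.0f, 0.0f, hRight - h };
    Vec3 toUp    = { 0.0f, 1.0f, hUp - h };
    return normalize(cross(toRight, toUp));
}
```

<p>For flat terrain this yields (0, 0, 1); any height difference between neighbours tilts the normal accordingly.</p>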
<p>The normals generated in this step are stored in object space, since this texture will only ever be used on this planet in this orientation. The advantage is that I don&#8217;t need to work with tangent space: I can read the normal directly from the texture&#8217;s rgb, remap it from the stored 0.0&#8211;1.0 range back to -1.0&#8211;1.0, and use the resulting vector.</p>
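<p>Reading the normal back is then just a per-channel remap, something like this (illustrative only):</p>

```cpp
// The rgb channels store the object-space normal compressed into the
// 0.0-1.0 range; expand each channel back into -1.0-1.0 before use.
float decodeChannel(float stored)
{
    return stored * 2.0f - 1.0f;
}
```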
]]></content:encoded>
			<wfw:commentRss>https://leifnode.com/2014/04/compute-texturing/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Planetary Scale LOD Terrain Generation</title>
		<link>https://leifnode.com/2014/04/planetary-scale-lod-terrain-generation/</link>
		<comments>https://leifnode.com/2014/04/planetary-scale-lod-terrain-generation/#comments</comments>
		<pubDate>Tue, 22 Apr 2014 12:51:07 +0000</pubDate>
		<dc:creator><![CDATA[Leif Erkenbrach]]></dc:creator>
				<category><![CDATA[OpenGL]]></category>
		<category><![CDATA[Programming]]></category>

		<guid isPermaLink="false">http://leifnode.com/?p=197</guid>
		<description><![CDATA[In the week following my procedural terrain I added a dynamic quadtree-based LOD system in order to be capable of rendering large scale terrain at distances ranging from several kilometers to a few meters. At this point I ran into the issue that floating point is not accurate enough to accomplish centimeter precision at distances of several hundred thousand meters …<p> <a class="continue-reading-link" href="https://leifnode.com/2014/04/planetary-scale-lod-terrain-generation/">Continue reading<i class="icon-right-dir"></i></a></p>]]></description>
				<content:encoded><![CDATA[<p><a href="http://leifnode.com/wp-content/uploads/2014/04/Planet-LOD.jpg"><img class="alignnone size-large wp-image-236" src="http://leifnode.com/wp-content/uploads/2014/04/Planet-LOD-1024x576.jpg" alt="Planet-LOD" width="920" height="517" /></a></p>
<p>In the week following my procedural terrain I added a dynamic quadtree-based LOD system in order to be capable of rendering large scale terrain at distances ranging from several kilometers to a few meters.</p>
<p>At this point I ran into the issue that floating point is not accurate enough to achieve centimeter precision at distances of several hundred thousand meters from the origin (0.0, 0.0, 0.0). I still need to move the height calculations into view space so that they do not lose accuracy.</p>
<p>In order to determine what level of detail to use for a given section of terrain, I precalculate the list of distances within which each LOD level applies. I start with the distance at which the most detailed tiles should appear, normally around 50 meters, and double it for every coarser level of detail up to the maximum depth: 100, 200, 400, and so on. From the root node I check whether the center of the plane is within 50*2^(max level of detail) of the camera. If it is, I split the node into four subsections and check whether their centers&#8217; distances to the camera are less than 50*2^(max level of detail &#8211; 1), 50*2^(max level of detail &#8211; 2), &#8230;, 50*2^(max level of detail &#8211; n), until the entire tree has been checked. This is done every frame, and every tile in range is stored in a vector along with its level of detail, its scale (equal to 0.5^(node depth)), and its offset, which is accumulated as the tree is traversed downwards.</p>
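<p>The threshold list and split test described above can be sketched like this (<em>buildSplitDistances</em> and <em>shouldSplit</em> are illustrative names, not the engine&#8217;s actual code):</p>

```cpp
#include <vector>

// The finest tiles appear within baseDistance (about 50 m here) and each
// coarser level doubles the range, so a node at 'depth' splits when the
// camera is closer than base * 2^(maxDepth - depth).
std::vector<double> buildSplitDistances(double base, int maxDepth)
{
    std::vector<double> dist(maxDepth + 1);
    for (int depth = 0; depth <= maxDepth; ++depth)
        dist[depth] = base * double(1ULL << (maxDepth - depth));
    return dist;
}

bool shouldSplit(const std::vector<double>& dist, int depth, double cameraDistance)
{
    // Nodes at the maximum depth never split further.
    return depth < int(dist.size()) - 1 && cameraDistance < dist[depth];
}
```

<p>With a 50 m base and a maximum depth of 4, this produces the 800, 400, 200, 100, 50 sequence of split distances.</p>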
<p>I generate a single plane which I draw multiple times with different parameters in order to cover the entire quad tree. This plane has four separate index buffers that specify the indices for four subsections of the mesh that constitute its four quadrants. If a quadrant is within range of the player then it gets split into another node and the indices that constitute that area of the parent node are skipped during drawing.<br />
<img class="alignnone wp-image-230 " src="http://leifnode.com/wp-content/uploads/2014/04/QuadTreeLOD.jpg" alt="QuadTreeLOD" width="634" height="443" /></p>
<p>Once the tree has been traversed I draw the mesh for each tile stored in the list of visible tiles. The shaders that draw this take in the offset and scale to transform the plane to fit into its section. After the plane is scaled and offset I project each point onto a sphere using the equation <a href="http://mathproofs.blogspot.com/2005/07/mapping-cube-to-sphere.html">here</a>. Each point is then used as input in a fractional Brownian motion function that uses 3D simplex noise in order to determine the height of the terrain at that point on the unit sphere.</p>
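<p>For reference, the cube-to-sphere mapping from the linked page can be written out like this (my transcription of the formula, with points taken from the surface of the unit cube [-1, 1]^3):</p>

```cpp
#include <cmath>

// Map a point on the surface of the unit cube onto the unit sphere.
// Unlike plain normalization, this keeps the projected grid spacing
// fairly even across each cube face.
void cubeToSphere(double x, double y, double z,
                  double& sx, double& sy, double& sz)
{
    double x2 = x * x, y2 = y * y, z2 = z * z;
    sx = x * std::sqrt(1.0 - y2 / 2.0 - z2 / 2.0 + y2 * z2 / 3.0);
    sy = y * std::sqrt(1.0 - z2 / 2.0 - x2 / 2.0 + z2 * x2 / 3.0);
    sz = z * std::sqrt(1.0 - x2 / 2.0 - y2 / 2.0 + x2 * y2 / 3.0);
}
```

<p>Every input on the cube surface lands exactly on the unit sphere; the corner (1, 1, 1), for example, maps to (1/sqrt(3), 1/sqrt(3), 1/sqrt(3)).</p>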
<p>All of the meshes are generated per frame and the program runs at ~200 fps on my GTX 480.<br />
<iframe src="//www.youtube.com/embed/24QWBuIInOk?rel=0" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
]]></content:encoded>
			<wfw:commentRss>https://leifnode.com/2014/04/planetary-scale-lod-terrain-generation/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Procedural Fractal Terrain</title>
		<link>https://leifnode.com/2014/04/procedural-fractal-terrain/</link>
		<comments>https://leifnode.com/2014/04/procedural-fractal-terrain/#comments</comments>
		<pubDate>Tue, 22 Apr 2014 10:34:14 +0000</pubDate>
		<dc:creator><![CDATA[Leif Erkenbrach]]></dc:creator>
				<category><![CDATA[OpenGL]]></category>
		<category><![CDATA[Programming]]></category>

		<guid isPermaLink="false">http://leifnode.com/?p=189</guid>
		<description><![CDATA[After doing atmospheric scattering I thought that it would be an interesting challenge to make a full scale planet renderer. This is what I started with when looking to generate the terrain. The terrain height and color are determined through a height map that I generate in the vertex and fragment shaders. I generated the height map using the 2D …<p> <a class="continue-reading-link" href="https://leifnode.com/2014/04/procedural-fractal-terrain/">Continue reading<i class="icon-right-dir"></i></a></p>]]></description>
				<content:encoded><![CDATA[<p><a href="http://leifnode.com/wp-content/uploads/2014/04/Procedural-Terrain.jpg"><img class="alignnone size-large wp-image-191" src="http://leifnode.com/wp-content/uploads/2014/04/Procedural-Terrain-1024x548.jpg" alt="Procedural Terrain" width="920" height="492" /></a></p>
<p>After doing atmospheric scattering I thought that it would be an interesting challenge to make a full-scale planet renderer. This is what I started with when looking to generate the terrain. The terrain height and color are determined through a height map that I generate in the vertex and fragment shaders.</p>
<div id="attachment_193" style="width: 310px" class="wp-caption alignnone"><a href="http://leifnode.com/wp-content/uploads/2014/04/Procedural-Heightmap.jpg"><img class="size-medium wp-image-193" src="http://leifnode.com/wp-content/uploads/2014/04/Procedural-Heightmap-300x160.jpg" alt="Procedural Heightmap" width="300" height="160" /></a><p class="wp-caption-text">The height map for another piece of terrain</p></div>
<p>I generated the height map using the 2D simplex noise implementation from <a href="https://github.com/ashima/webgl-noise">webgl-noise</a>, running in the vertex and fragment shaders. Simplex noise looks similar to Perlin noise and gives a slight performance increase for 2D and 3D noise functions. Alone, simplex noise would make for pretty boring terrain; however, if you add two or more octaves of simplex noise together, you start to get some real detail. This technique is called fractional Brownian motion, and it is what I used to create these fairly detailed height maps.</p>
<p>This is what the fractional Brownian motion function that I use looks like:</p>
<pre class="brush: cpp; title: ; notranslate">
// Fractional Brownian motion: sum 'octaves' layers of simplex noise,
// scaling the frequency by 'lacunarity' and the amplitude by 'gain'
// for each successive octave.
float fbm(vec3 x, float initialFrequency, float lacunarity, float gain, int octaves)
{
	float total = 0.0f;
	float frequency = initialFrequency;
	float amplitude = gain;

	for (int i = 0; i &lt; octaves; ++i)
	{
		total += simplexNoise(x * frequency) * amplitude;
		frequency *= lacunarity; // typically above 1: finer detail each octave
		amplitude *= gain;       // typically below 1: each octave contributes less
	}

	return total;
}
</pre>
]]></content:encoded>
			<wfw:commentRss>https://leifnode.com/2014/04/procedural-fractal-terrain/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Atmospheric Scattering, Skybox, and Logarithmic Depth Buffering</title>
		<link>https://leifnode.com/2014/04/atmospheric-scattering-skybox-and-logarithmic-depth-buffering/</link>
		<comments>https://leifnode.com/2014/04/atmospheric-scattering-skybox-and-logarithmic-depth-buffering/#comments</comments>
		<pubDate>Tue, 22 Apr 2014 10:06:44 +0000</pubDate>
		<dc:creator><![CDATA[Leif Erkenbrach]]></dc:creator>
				<category><![CDATA[OpenGL]]></category>
		<category><![CDATA[Programming]]></category>

		<guid isPermaLink="false">http://leifnode.com/?p=185</guid>
		<description><![CDATA[A few months ago I decided to implement atmospheric scattering in my OpenGL renderer in order to create a day/night cycle. This turns out to be a fairly cheap effect to calculate if you do not need it to be completely accurate or implement multiple scattering. One of the early GPU Gems books had an implementation of atmospheric scattering that creates …<p> <a class="continue-reading-link" href="https://leifnode.com/2014/04/atmospheric-scattering-skybox-and-logarithmic-depth-buffering/">Continue reading<i class="icon-right-dir"></i></a></p>]]></description>
				<content:encoded><![CDATA[<p><a href="http://leifnode.com/wp-content/uploads/2014/04/Atmospheric-Scattering.jpg"><img class="alignnone size-large wp-image-186" src="http://leifnode.com/wp-content/uploads/2014/04/Atmospheric-Scattering-1024x576.jpg" alt="Atmospheric Scattering" width="920" height="517" /></a></p>
<p>A few months ago I decided to implement atmospheric scattering in my OpenGL renderer in order to create a day/night cycle.</p>
<p>This turns out to be a fairly cheap effect to calculate if you do not need it to be completely accurate or implement multiple scattering. One of the early GPU Gems books had an implementation of atmospheric scattering that creates an accurate enough representation. The book is now online, and the article can be found <a href="http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter16.html">here</a>.</p>
<p>The article comes with sample shader code, but using it raised some larger issues with the depth buffer. In order to keep the atmosphere to scale I wanted to render a sphere of planetary size, so I increased the distance of the far clipping plane. With the near plane of the projection frustum situated 1 meter from the camera and the far plane 300,000 km away, pretty much everything in the frustum flickered in and out of view. Instead of the linear depth buffer I had been using, I switched to a logarithmic depth buffer, which allows a far greater range by concentrating most of the depth precision close to the near clipping plane. I used the vertex shader implementation demonstrated on the <a href="http://outerra.blogspot.com/2012/11/maximizing-depth-buffer-range-and.html">Outerra Blog</a>, and this fixed the flickering issues I was having.</p>
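<p>In CPU form, the depth remapping from that article boils down to something like this. This is a sketch with the article&#8217;s offset constant taken as 1; the function name is mine, not from the original code.</p>

```cpp
#include <cmath>

// Compress eye-space distance w logarithmically into NDC depth [-1, 1],
// so most of the depth-buffer precision stays near the camera even with
// a far plane hundreds of thousands of kilometers away. In the shader
// this value is multiplied by w before the perspective divide.
double logarithmicDepth(double w, double farPlane)
{
    double logz = std::log(w + 1.0) / std::log(farPlane + 1.0); // 0..1
    return 2.0 * logz - 1.0;
}
```

<p>Nearby points end up with far more depth resolution than distant ones, which is what eliminated the flickering.</p>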
<p>Finally, in order to make it feel like a planet in space, I needed to implement a space skybox. Most of the trouble I had was locating tools to create the cube map for the skybox texture. I ended up finding a great tool named <a href="http://sourceforge.net/projects/spacescape/">Spacescape</a> for designing space skyboxes (although I ended up using one of the included samples). To get the generated .png files into a cube map I used <a href="http://developer.amd.com/tools-and-sdks/archive/legacy-cpu-gpu-tools/cubemapgen/">ATI CubeMapGen</a>, which let me conveniently load each .png, place it in its correct position, and export the cube map as a .dds file. From there I did not have much trouble, since I had already loaded .dds files in previous work; I just needed to render a cube with inverse winding order and no view-based translation.</p>
<p><iframe src="//www.youtube.com/embed/jEDb8CCR3aE?rel=0" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
]]></content:encoded>
			<wfw:commentRss>https://leifnode.com/2014/04/atmospheric-scattering-skybox-and-logarithmic-depth-buffering/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
