Last time, I finished off with a scene of floating crates demonstrating some simple diffuse, specular, and emission maps with the help of a very basic point light implementation. With that, I had two main goals in mind:
- Figure out how to support multiple sources of light at once.
- Implement more types of light source.
The next LearnOpenGL chapters on lighting—Light casters and Multiple lights—covered both of those goals nicely, so I continued following them and worked to integrate the results (something which is becoming increasingly challenging as the GL-related parts of the engine codebase diverge more and more from the LearnOpenGL examples!).
First up was the directional light. This type of light mimics distant light sources (e.g. sunlight) by casting light on the whole scene from a given angle; unlike other lights, directional lights don’t have positions.
They do share three properties with the other light types—namely ambient, diffuse, and specular colours—but I opted to create a separate struct for each light type anyway.
DirectionalLight looks like this:
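A sketch of what the component might look like, assuming the same properties LearnOpenGL uses (the engine itself most likely uses `glm::vec3`; a minimal stand-in is used here to keep the example self-contained):

```cpp
#include <cassert>

// Minimal stand-in for a vector type; the engine would use glm::vec3.
struct Vec3 { float x, y, z; };

// Sketch of the DirectionalLight component: a direction instead of a
// position, plus the three colour properties shared by all light types.
struct DirectionalLight
{
    Vec3 direction;
    Vec3 ambient;
    Vec3 diffuse;
    Vec3 specular;
};
```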
To get this working in the scene, the first thing I did was create an entity and attach a handful of components, including DirectionalLight, to it. Most of those components aren’t used by anything yet, but my MeshRenderer system—which is responsible for rendering entities with Material components—was also where I was temporarily handling lighting updates, so attaching them was a quick and dirty way of getting the light picked up there. I then modified MeshRenderer to hold a reference to the directional light’s entity ID and pass the light data as uniforms to the shader program:
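On the receiving end, the fragment shader declares matching uniforms. A sketch of what that might look like, following the LearnOpenGL convention (the uniform name is an assumption):

```glsl
struct DirectionalLight
{
    vec3 direction;
    vec3 ambient;
    vec3 diffuse;
    vec3 specular;
};

uniform DirectionalLight directionalLight;
```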
At this point I decided to keep the light I previously implemented, which was essentially a point light (i.e. a light at a specific position emitting in all directions), and see if I could get it working nicely alongside the directional light. My first attempt at this sort of worked, but was definitely not correct! With the data for each light available to the fragment shader, I calculated diffuse and specular factors for each light and factored them into the final diffuse and specular calculations. Take the specular calculations for example:
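Reconstructed roughly (this isn't the exact code, and variable names follow LearnOpenGL conventions), the flawed approach looked something like this—a specular factor per light, but only one combined specular term:

```glsl
// Specular factor for the directional light
vec3 dirLightDir = normalize(-directionalLight.direction);
float dirSpec = pow(max(dot(viewDir, reflect(-dirLightDir, norm)), 0.0), material.shininess);

// Specular factor for the point light
vec3 pointLightDir = normalize(pointLight.position - FragPos);
float pointSpec = pow(max(dot(viewDir, reflect(-pointLightDir, norm)), 0.0), material.shininess);

// The flawed part: both factors feed into a single specular term, rather
// than each light contributing its own complete specular result.
vec3 specular = (dirSpec * directionalLight.specular + pointSpec * pointLight.specular)
              * vec3(texture(material.specular, TexCoords));
```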
Here’s how it looked (note the black cube, which represents the directional light, and the white one, which represents the original point light):
It kind of worked, but the point light was significantly weaker despite being so close to one of the crates. What I actually needed to do was calculate the full ambient + diffuse + specular result per light, sum those results, and add the material’s emission colour to get the final fragment colour. More on this further down.
Before moving on to properly implementing multiple light support, I decided to implement spot lights. Unlike point lights, which emit in all directions, spot lights emit light in a cone shape from a specific point, with the angle and several other properties affecting the direction and size of the cone, how soft the edges of the light are, and the attenuation curve (which determines how the light’s intensity diminishes over distance). Implementations and available parameters seem to vary between engines, but I just followed the example given on LearnOpenGL, starting with some additions to the struct:
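A sketch of the spot light component following the LearnOpenGL parameters (the exact field names in the engine may differ, and `Vec3` again stands in for `glm::vec3`):

```cpp
#include <cassert>

// Stand-in for glm::vec3.
struct Vec3 { float x, y, z; };

// Sketch of a SpotLight component, per the LearnOpenGL spot light example.
struct SpotLight
{
    Vec3 position;
    Vec3 direction;
    Vec3 ambient, diffuse, specular;

    // Cone angles, stored as cosines so the shader can compare them
    // directly against a dot product without calling acos().
    float cutOff;      // inner cone
    float outerCutOff; // outer cone (defines the soft edge)

    // Attenuation terms: intensity falls off as 1 / (c + l*d + q*d^2).
    float constant;
    float linear;
    float quadratic;
};

// Attenuation factor at distance d from the light.
float Attenuation(const SpotLight& light, float d)
{
    return 1.0f / (light.constant + light.linear * d + light.quadratic * d * d);
}
```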
I wired this up to MeshRenderer and the material shader program in the same way as the other lights, then created an entity with a SpotLight component attached.
I then created a separate SpotLightDemo system to “attach” the spot light entity to the camera by syncing the camera’s position and front vectors with it:
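A dependency-free sketch of that sync step (the type shapes and names here are illustrative, not the engine’s actual API):

```cpp
#include <cassert>

// Stand-in for glm::vec3.
struct Vec3 { float x, y, z; };

// Hypothetical shapes for the data involved.
struct Camera { Vec3 position; Vec3 front; };
struct SpotLight { Vec3 position; Vec3 direction; /* colours, cone, attenuation... */ };

// Each update, copy the camera's position and front vector onto the spot
// light so it behaves like a torch carried by the camera.
void SyncSpotLightToCamera(SpotLight& light, const Camera& camera)
{
    light.position = camera.position;
    light.direction = camera.front;
}
```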
In order to test it in isolation, I replaced the lighting calculations in the fragment shader with the spot light calculations.
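The core of those calculations, following LearnOpenGL (a reconstruction, not the exact code), is the soft-edged cone test, working with cosines throughout:

```glsl
// theta: cosine of the angle between the light-to-fragment direction and
// the spot light's axis.
vec3 lightDir = normalize(spotLight.position - FragPos);
float theta = dot(lightDir, normalize(-spotLight.direction));

// Fade smoothly between the inner and outer cone angles.
float epsilon = spotLight.cutOff - spotLight.outerCutOff;
float intensity = clamp((theta - spotLight.outerCutOff) / epsilon, 0.0, 1.0);

// intensity then scales the diffuse and specular contributions, with the
// usual distance attenuation factor applied on top.
```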
Here’s how it looked in-scene:
Let there be lights
The first step towards rendering the scene with multiple lights was to modify the fragment shader to accept arrays for spot lights and point lights (not directional lights, since there only needs to be one in existence at any given moment). I set a maximum size for the arrays (since we can’t have an infinite number of lights and it’s a good idea to keep the number of active lights below a certain threshold anyway, which I’ll explore later) and also added uniforms for specifying the number of currently active lights for each type:
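The shader-side declarations might look something like this (the maximums and uniform names here are illustrative):

```glsl
#define MAX_POINT_LIGHTS 16
#define MAX_SPOT_LIGHTS 4

uniform DirectionalLight directionalLight;
uniform PointLight pointLights[MAX_POINT_LIGHTS];
uniform SpotLight spotLights[MAX_SPOT_LIGHTS];

// How many array entries actually contain live light data this frame.
uniform int activePointLights;
uniform int activeSpotLights;
```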
I then defined functions to take care of lighting calculations, similar to the examples given on LearnOpenGL. Here are their prototypes:
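Assuming the same shape as the LearnOpenGL examples (exact names may differ), the prototypes look like this—one function per light type, each returning that light’s complete contribution:

```glsl
vec3 CalcDirectionalLight(DirectionalLight light, vec3 normal, vec3 viewDir);
vec3 CalcPointLight(PointLight light, vec3 normal, vec3 fragPos, vec3 viewDir);
vec3 CalcSpotLight(SpotLight light, vec3 normal, vec3 fragPos, vec3 viewDir);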
With these in place, I could simply start with a fragment colour (black) and add the result of each light calculation to it:
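A sketch of that accumulation in `main()` (reconstructed; `viewPos` is the camera position uniform mentioned below, since these calculations ended up in world space):

```glsl
void main()
{
    vec3 norm = normalize(Normal);
    vec3 viewDir = normalize(viewPos - FragPos);

    // Start with black and accumulate each light's full contribution.
    vec3 result = vec3(0.0);
    result += CalcDirectionalLight(directionalLight, norm, viewDir);

    for (int i = 0; i < activePointLights; ++i)
        result += CalcPointLight(pointLights[i], norm, FragPos, viewDir);

    for (int i = 0; i < activeSpotLights; ++i)
        result += CalcSpotLight(spotLights[i], norm, FragPos, viewDir);

    // Finally, add the material's emission colour.
    result += vec3(texture(material.emission, TexCoords));

    FragColor = vec4(result, 1.0);
}
```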
During the process of getting this to work, I ran into issues figuring out the correct calculations in view space, so I moved back to doing shader calculations in world space—which requires providing the camera position to the fragment shader—to get it working. When I understand the maths better, I’ll come back to this to see if I can get it working in view space.
At this point, I also realised that MeshRenderer was simply getting too cumbersome and doing too much work, so I turned my light demo systems (including SpotLightDemo) into dedicated controller systems (including SpotLightController), moving all of the lighting data updates into them and removing that burden from MeshRenderer. This was a much cleaner arrangement, with a total of four systems registered:
However, it presented one problem: both of the light controllers required access to the same shader program as MeshRenderer, i.e. the Material shader program, which until this point was being instantiated in MeshRenderer’s constructor. I had a few solutions in mind for this, but ultimately decided to expand my asset management with a ShaderProgramRepository (on the basis that shader programs are built from GLSL source files read from disk), which enabled systems (or anything else) to obtain the same instance of a given shader program.
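A minimal sketch of how such a repository might work—one shared instance per program name, created on first request (the names and signatures here are assumptions, and the stub `ShaderProgram` stands in for a class that would actually compile and link GLSL sources from disk):

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>

// Stub; the real class would build a GL program from shader source files.
struct ShaderProgram
{
    explicit ShaderProgram(std::string n) : name(std::move(n)) {}
    std::string name;
};

// Hands out a shared instance per program name, creating it on first use,
// so every system sees the same shader program object.
class ShaderProgramRepository
{
public:
    std::shared_ptr<ShaderProgram> Get(const std::string& name)
    {
        auto it = m_programs.find(name);
        if (it == m_programs.end())
            it = m_programs.emplace(name, std::make_shared<ShaderProgram>(name)).first;
        return it->second;
    }

private:
    std::map<std::string, std::shared_ptr<ShaderProgram>> m_programs;
};
```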
Problem solved! After some tweaking of the light data and number of lights, here’s how the scene looked:
One directional light, numerous point lights, and a spot light attached to the camera (although it’s not very noticeable here).
Bonus: GUI integration
Having a scene full of lights with various properties is nice, but changing those properties in code and recompiling every time I want to test or verify something was starting to get cumbersome, so I decided to integrate a GUI library to make things easier. There are lots of GUI options for C++, but I was quick to opt for the popular Dear ImGui—an immediate mode GUI library used in many game projects to provide debug UIs or interfaces for in-house tools. It’s sponsored by some big names such as Blizzard and Ubisoft, and often shows up in behind-the-scenes footage at GDC talks, so I knew it would easily meet my criteria.
I started by grabbing the ImGui headers and source files and updating my CMake config to pull them in. Even this took some trial and error, as I’d forgotten a few headers and had to link against imm32 (needed on Windows), among other things. However, it didn’t take long to get a very basic integration of the ImGui demo working, which entailed calling a series of
ImGui::* functions for initialisation, frame preparation at the start of the main loop, drawing various elements, rendering the collected draw data (just before the frame buffers are swapped), and cleaning up after the main loop.
Here’s how the integration looked:
I was happy with that progress, but before I could make any real use of it, I wanted to clean up the integration and make it more manageable. I spent a while thinking about this, at one point peeking at TheCherno’s Hazel engine (which uses Dear ImGui for its editor interface), and eventually settled on a basic concept consisting of two components:
- ImGuiLayer: a container for ImGui elements, responsible for the whole lifecycle of preparing and drawing them.
- ImGuiPanel: a simple interface for implementing ImGui windows (which I opted to refer to as panels to avoid confusion with windows in the OS/GLFW sense). These can be attached to an ImGuiLayer.
I then expanded State—which was so far only responsible for pairing window instances with scene instances—to include an ImGuiLayer instance, and expanded App to include an m_debug flag (toggleable with the ‘H’ key) and make the necessary calls to the ImGuiLayer instances throughout the main loop, which looks like this:
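A dependency-free sketch of how these two pieces might fit together (method names are assumptions; in the real implementations the commented-out lines would be actual ImGui calls, invoked from the main loop when the debug flag is set):

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Interface for a debug panel; Draw() would contain ImGui::Begin(...),
// some widgets, and ImGui::End().
class ImGuiPanel
{
public:
    virtual ~ImGuiPanel() = default;
    virtual void Draw() = 0;
};

// Owns the attached panels and the per-frame ImGui lifecycle.
class ImGuiLayer
{
public:
    void AttachPanel(std::shared_ptr<ImGuiPanel> panel)
    {
        m_panels.push_back(std::move(panel));
    }

    // Start of frame: e.g. ImGui_ImplOpenGL3_NewFrame(); ImGui::NewFrame();
    void BeginFrame() {}

    // Draw every attached panel between NewFrame and Render.
    void Draw()
    {
        for (auto& panel : m_panels)
            panel->Draw();
    }

    // End of frame, just before the buffer swap:
    // e.g. ImGui::Render(); ImGui_ImplOpenGL3_RenderDrawData(ImGui::GetDrawData());
    void EndFrame() {}

private:
    std::vector<std::shared_ptr<ImGuiPanel>> m_panels;
};
```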
With that groundwork in place, I created two
ImGuiPanel implementations. The first,
InfoPanel, displays frame time, FPS, and some details about the camera. The second,
PointLightControlPanel, exposes a list of point lights by their entity IDs, allowing them to be selected and their
PointLight properties to be edited in real time.
Once the panels were attached to the layer, all that was left to do was play with it:
Much better! This also helped me identify an issue in the fragment shader that was causing lighting to be slightly wrong from some angles, so having a debug UI is already paying off.
This post reflects my biggest chunk of progress on the engine so far, and while working on it, I inevitably thought of more things to add to the To Do list. I’ll likely work on a few of those things next, or I might just push on to the next LearnOpenGL chapter, Model Loading—so I can finally look at (and show off) something other than crates with glowing arcane symbols on them!
The latest code for this project can be found at https://github.com/Riari/iris-engine.