All posts by Joey

Update 2017/01/29

Over the last couple weeks, I’ve been adding several workflow improvements to Shader Foundry. Now that these improvements have been completed, I figured that it would be a good idea to take this opportunity to post some details about our shader export tool.

There is a new page in the Tools section that gives a simple overview: Tools / Shader Foundry.

I also posted an article discussing the history and improvements: Shader Foundry.

Shader Foundry

Over the last couple weeks, I’ve been adding several workflow improvements to Shader Foundry. Now that these improvements have been completed, I figured that it would be a good idea to take this opportunity to post some details about our shader export tool.

In short, Shader Foundry manages our shader pipeline. It serves as a simple GUI for Microsoft’s shader compiler (FXC) and provides visibility for the state of each shader asset.

We chose a very minimalistic approach for our shader asset pipeline: shaders are compiled by fxc.exe and the output is directly used by the engine. There is no additional processing involved.
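
For example, compiling a single pixel shader through FXC might look like the following (the file names and entry point here are hypothetical):

fxc.exe /nologo /T ps_5_0 /E PSMain /Fo Lit.pso Lit.hlsl

Here, /T selects the target shader profile, /E names the entry point, and /Fo specifies the compiled output file.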

Also, Sauce does not make use of the Effects framework; instead, we use individual shader source files explicitly written for one of the six shader types supported by D3D11:

  • Compute Shaders
  • Domain Shaders
  • Geometry Shaders
  • Hull Shaders
  • Pixel Shaders
  • Vertex Shaders

History

In the early days of implementing our Graphics library (now officially referred to as Graphics 1.0), Alex created a simple tool to scan our Art/ directory and compile the shaders.

Some months later, it turned out that we needed a similar interface for our mesh exporter. At that point, we decided to create a unified Pipeline tool that incorporated both the shader compiler and the mesh exporter, with the intention that as new asset types came online, we would integrate them into the established framework.

The Pipeline user interface featured a file tree, an asset grid, and a single tool bar with buttons to refresh, export, and clean assets. As you might expect, this unification of the UI also included a unification of the data model.

When I started working on Graphics 2.0 (Spring 2016), I took the opportunity to reverse course on the unified pipeline framework. After a couple years of fairly regular use, it had become clear that while the Pipeline tool did make strides in the right direction in terms of its user interface, there were many shader-export-specific improvements being stifled by the unification requirement.

Furthermore, I had the feeling that the Pipeline tool had come to a crossroads: either continue to expand and eventually become a full-blown asset editor, or split into smaller, domain-focused tools. It’s probably obvious at this point that I chose the latter. My reasoning was that I definitely didn’t need an all-encompassing asset suite, especially when I was just starting a complete overhaul of my rendering engine. It would have become a distraction from what I was primarily trying to accomplish: Graphics 2.0. I was looking for fewer speed bumps and detours, not more.

An Old Idea Made New

Shader Foundry started off as a stripped-down version of the Pipeline tool. The directory tree with a corresponding asset grid was a great foundation for the user interface. From there it grew to support all six shader types (Graphics 1.0 only supported vertex and pixel shaders, so the tooling followed suit).

Filters Bar

To be honest, the explosion in the number of shader types was another argument in favor of extracting the shader exporter back into a standalone tool. Since the Pipeline tool was designed to operate on any asset type (shader, mesh, animation, …), it required filters for each of those types to cater to the common workflow.

However, there was no way to filter on any of the asset “sub-types” (ex: vertex shaders vs pixel shaders). I experimented with ways to do this in the unified framework, but they were all complex and felt over-engineered. As such, when I extracted the UI from the Pipeline tool, the “Filters” bar was repopulated with toggle buttons for each of the different shader types.

File Monitoring

In the Pipeline tool, if you made a change to a source file or perhaps deleted an asset file, you needed to manually refresh the interface to reevaluate the files and update their status. That’s what that big “Refresh” button was all about.

One of my goals in Shader Foundry was to remove workflow friction. In this case, I was able to eliminate the “Refresh” button altogether. Instead, the tool actively monitors the shader source files and their dependencies. If a shader source file is changed, or a corresponding asset file is removed, it immediately flags the asset as “out of date”.

I should point out that the shaders used for Graphics 1.0 were pretty simple and all self-contained. However, the shaders in Graphics 2.0 include other common source code which, in turn, may also include other common source code. Consequently, if a common include file is modified, the status for all dependent shaders is updated accordingly.
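
The bookkeeping behind this boils down to a reverse dependency map. Here is a minimal C++ sketch of the idea (the names and structure are hypothetical, not Shader Foundry’s actual code):

#include <string>
#include <unordered_map>
#include <unordered_set>
#include <vector>

// Maps each source file to every shader that (transitively) includes it.
// A shader source file also maps to itself.
std::unordered_map<std::string, std::vector<std::string>> gDependents;

// Shaders currently flagged as "out of date" in the asset grid.
std::unordered_set<std::string> gOutOfDate;

// Called by the file monitor whenever a watched file changes on disk.
void OnFileChanged(const std::string& path)
{
   const auto found = gDependents.find(path);
   if (found == gDependents.end())
      return;

   for (const std::string& shader : found->second)
      gOutOfDate.insert(shader);
}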

As you might imagine, this facilitates a much better user experience than mechanically having to click the “Refresh” button after any change to shaders or common include files.

Asset Details

Another issue with the workflow in the Pipeline tool was the additional layer of indirection required to investigate shader compilation errors. If there was an error, even though the status would get changed immediately, the only way to inspect the error was to click a button to open a dialog with the FXC output. Doing this once in a while is mildly annoying… doing this as a means of shader debugging is downright painful.

In Shader Foundry, I moved the Asset Details panel from a separate dialog and placed it below the Asset Grid. This way, the user can select an entry in the grid and the Asset Details can immediately display the corresponding data. While it might not seem like much of a change, it actually makes a huge difference in iteration time.

Settings Dialog

The Settings dialog for the Pipeline tool was embarrassingly bare. There was a General tab where the Root Directory was set, and then individual tabs for the shader exporter and the mesh exporter, each with a single path loader control for their corresponding external executables.

When it came to setting up Shader Foundry, I knew that I wanted to expose a few of the FXC options. I decided to model the Settings dialog on the Visual Studio Options dialog. In this case, I built a base Master-Detail control that can be reused for other tools.

About Dialog

Lastly, I added an About dialog. This was really just for fun, and it turned out to be really cool!

The main idea was to create an About dialog that was reusable for all of the tools. The image and the data all needed to come from the application with minimal setup. As such, I created an About form based on the AboutBox template, which uses the AssemblyInfo to populate the text. The image is extracted from the executable icon and displayed inside a field of gray. I think it looks pretty sharp.

Final Thoughts

I’m sure there will be plenty more to do down the road as I continue to use it, but I think that Shader Foundry is a big improvement over the Pipeline shader exporter. That said, here are a couple of points to take away from this venture.

Let Experience Drive: Don’t be afraid to reverse course on a decision — you are not starting from scratch even if you decide to start over. Take heed of the lessons you’ve learned from the current version and make something better.

Complexity Breeds Complexity: Steer clear of piling on layers of architecture to force one set of things to work alongside another in the name of a “unified” framework. Architecture requires time to put into place and converts assumptions into pillars. Whenever possible, keep the architecture lean so that you can be responsive to change.

Update 2016/10/22

Over the past couple weeks, I’ve been working to update Sauce to take advantage of a couple of the features of “modern C++”. While the time investment has been pretty significant, I feel that the new features are substantial improvements.

Scoped Enums

First, I converted all of our enums to be scoped enums (introduced in C++11). This was more time consuming than I had expected, but well worth the effort.

I wrote up a detailed account in a separate post: Scoped Enums.

Replace NULL with nullptr

Also, I replaced all instances of NULL with nullptr (introduced in C++11).

Similar to many other C++ code bases, our forced include file Core.h defined NULL as 0. Unfortunately, since this is simply an integer and not an actual null pointer, using NULL can hide bugs. In fact, during the transition to nullptr, I found a few silent issues lurking in our code base.

Further discussion can be found in this post: Nullptr.

Nullptr

This last week, I spent a couple days converting all of the uses of NULL in Sauce to nullptr. As such, I wanted to take this opportunity to jot down some notes on what the differences are and why I believe that it is worth your time to convert your own code to use nullptr if you haven’t already.

NULL

Similar to many other C++ code bases, Sauce used the following code to define NULL:

#if !defined(NULL)
   #define NULL	0
#endif

While it is true that many commercial software packages and games have shipped with this definition of NULL, it is still problematic.

The reason is that NULL is simply an integer, not an actual null pointer. The danger stems from the fact that other constructs can be implicitly converted to and from integers. This means that using NULL can hide bugs.

In fact, I’ll be the first to admit that during the transition to nullptr, I found a few of these types of conversion errors in Sauce. Sure, the code still compiled and ran — but it was a bit disheartening to find them nonetheless.

Nullptr

Unlike NULL, nullptr is a keyword (available starting in C++11). It can be implicitly converted to any pointer type, but cannot be implicitly converted to an integer. This allows the compiler to catch the sorts of type mismatches we would hope it would.

Let’s walk through an example to see the difference.

An Example

The problem we will explore in this example is the fact that booleans can be compared with NULL without a compiler warning. This is because the C++ standard declares that there is an implicit conversion from bool to int.

// signature:
Result* DoSomething();

// client code:
if (DoSomething() == NULL)
   printf("NULL\n");
else
   printf("NOT NULL\n");

Although this example is a bit contrived, the danger it exemplifies is real.

What happens if we change the return type of DoSomething() from Result* to bool? A change like this is certainly not unheard of — instead of using a full object, maybe we feel like we can reduce the result to a simple boolean.

// signature:
bool DoSomething();

// client code:
if (DoSomething() == NULL)
   printf("NULL\n");
else
   printf("NOT NULL\n");

The code still compiles, with no additional warnings (even on Warning level 4!). That seems wrong … checking whether a boolean is equal to a null pointer is nonsensical, and the compiler should bark when it comes across code like this, right?

Unfortunately, the compiler can’t detect the issue because we are using a #define as a stand-in for a null pointer. Remember, it’s not really a null pointer, it’s just the same value that a null pointer evaluates to: 0. Therefore, we shouldn’t be surprised when corner cases like this result in unexpected behavior.

So what happens if we replace NULL with the nullptr keyword?

// signature:
bool DoSomething();

// client code:
if (DoSomething() == nullptr)
   printf("NULL\n");
else
   printf("NOT NULL\n");

Now the compiler will generate an error stating that there is no conversion from 'nullptr' to 'int'. This is much better. We know that there is a type mismatch in the comparison, and we can repair the issue.

Final Thoughts

Simply put, the nullptr keyword is a true null pointer, while NULL is not.

When I was first deciding whether I was going to undertake the conversion task, I felt a bit overwhelmed at the number of changes that I would have to make (at the time there were over 5000 instances of NULL in Sauce). However, as I mentioned earlier, had I not made the transition to nullptr, those silent implicit conversion bugs would surely still be there. As such, I feel that Sauce is far better off with nullptr.

Scoped Enums

For the most part, I really like the C++ language. That said, I also have a small list of things that I wish had been done differently. For years, enumerations have been at the top of that list. Enums have a couple of distinct problems that make them troublesome, and while there are techniques to mitigate some of their issues, they still remain fundamentally flawed.

Thankfully, C++11 added scoped enums (or “strongly-typed” enums), which address these problems head-on. In my opinion, the best part about scoped enums is that the new syntax is intuitive and feels natural to the C++ language.

In an effort to build a case for why scoped enums are superior, we will first discuss the aforementioned deficiencies of their unscoped counterparts. Throughout this discussion we will also outline how we addressed some of these concerns in Sauce. Afterward, we will explore scoped enums and the task of transitioning Sauce to use them.

Terminology

Before we begin, let’s briefly establish some terminology. An unscoped enum has the following form:

enum IDENTIFIER
{
   ENUMERATOR,
   ENUMERATOR,
   ENUMERATOR,
};

The identifier is also referred to as the “type” of the enum. The list inside the enum is composed of enumerators. Each enumerator has an integral value.

Problem 1: Enumerators are treated as integers inside the parent scope.

Aliased Values

Consider the case where you have two enums inside the same parent scope. Unfortunately, there is no reinforcement by the compiler to say that a given enumerator is associated with one enum over the other. This can cause a couple issues. Here’s an example:

namespace Example1
{
   enum Shape
   {
      eSphere,
      eBox,
      eCone,
   };

   enum Material
   {
      eColor,
      eTexture,
   };
}

Now let’s see what happens when we try to use these enums in some client code:

const Example1::Shape shape = Example1::eSphere;
if (shape == Example1::eSphere)
   printf("SPHERE\n");
if (shape == Example1::eBox)
   printf("BOX\n");
if (shape == Example1::eCone)
   printf("CONE\n");

if (shape == Example1::eColor)
   printf("COLOR\n");
if (shape == Example1::eTexture)
   printf("TEXTURE\n");

The code above prints out both “SPHERE” and “COLOR”. This is because unscoped enum enumerators are implicitly converted to integers and the value of shape is 0, which matches both eSphere and eColor.

Sadly, the only workable solution is to manually assign each enumerator a value that is unique within the parent scope. This is far from ideal due to the added maintenance cost.

Enumerator Name Clashes

Additionally, there is a second issue that arises from the fact that enums are swallowed into their parent scope: enumerator name clashes. For instance, consider modifying the previous case to add an “invalid” enumerator to each enum. While this makes sense conceptually, the following code will not compile:

namespace Example2A
{
   enum Shape
   {
      eInvalid,
      eSphere,
      eBox,
      eCone,
   };

   enum Material
   {
      eInvalid,
      eColor,
      eTexture,
   };
}

Although enumerator name clashes are not too common, it is generally bad practice to establish coding conventions that depend on the rarity of such situations.

Consequently, this usually forces you to mangle the enumerator names to include the enum type. Modifying the previous example might look something like this:

namespace Example2B
{
   enum Shape
   {
      eShape_Invalid,
      eShape_Sphere,
      eShape_Box,
      eShape_Cone,
   };

   enum Material
   {
      eMaterial_Invalid,
      eMaterial_Color,
      eMaterial_Texture,
   };
}

This version of the code will compile, but now the enumerator names look a little weird. Also, it is important to point out that we are now repeating ourselves: the enum identifier is repeated in each of its enumerators.

Another way to solve the name clash issue is to wrap the enum with an additional scoping object: namespace, class, or struct. Employing this method will allow us to keep our original enumerator names, which I like. However, it actually introduces a new problem: now we need two names… one for the scope and one for the enum itself.

Admittedly, there are a few different ways to handle this, but for the sake of the example let’s keep things simple:

namespace Example2C
{
   namespace Shape
   {
      enum Enum
      {
         eInvalid,
         eSphere,
         eBox,
         eCone,
      };
   }

   namespace Material
   {
      enum Enum
      {
         eInvalid,
         eColor,
         eTexture,
      };
   }
}

While the extra nesting does make the declaration a bit ugly, it solves the enumerator name clash problem. Furthermore, it also forces client code to prefix enumerators with their associated scoping object, which I personally consider a big win.

// in some Example2C function...

const Shape::Enum shape = GetShape();
if (shape == Shape::eInvalid)
   printf("Shape::Invalid\n");
if (shape == Shape::eSphere)
   printf("Shape::Sphere\n");
if (shape == Shape::eBox)
   printf("Shape::Box\n");
if (shape == Shape::eCone)
   printf("Shape::Cone\n");

const Material::Enum material = GetMaterial();
if (material == Material::eInvalid)
   printf("Material::Invalid\n");
if (material == Material::eColor)
   printf("Material::Color\n");
if (material == Material::eTexture)
   printf("Material::Texture\n");

In fact, before the transition to scoped enums, most of the enums in Sauce were scoped this way. Unfortunately, the availability of choices in situations like this breeds inconsistency. Sauce was no exception: namespace, class, and struct were all being employed as scoping objects for enums in different parts of the code base (needless to say, I was pretty disappointed by this discovery).

Problem 2: Unscoped Enums cannot be forward declared.

This bothers me a lot. I’m very meticulous with my forward declarations and header includes, but unscoped enums have, at times, undermined my efforts. I also feel like it subverts the C++ mantra of not paying for what you don’t use.

For instance, if you want to use an enum as a function parameter, the full enum definition must be available, requiring a header include if you don’t already have it.

The following is a stripped-down example of the case in point:

Shape.h

namespace Shape
{
   enum Enum
   {
      eInvalid,
      eSphere,
      eBox,
      eCone,
   };
}

ShapeOps.h

#include "Shape.h"    // <-- BOO!

namespace ShapeOps
{
   const char* GetName(const Shape::Enum shape);
}

Unfortunately, short of giving the enum a fixed underlying type (another C++11 addition, and one our legacy enums did not use), there is no way around using a full include with unscoped enums. The situation is even more costly if the enum is inside a class header file that has its own set of includes.

Scoped Enums

Scoped enums were introduced in C++11. I am excited to report that not only do they solve all of the issues discussed above, but they also provide the client code with clean, intuitive syntax.

A scoped enum has the following form:

enum class IDENTIFIER
{
   ENUMERATOR,
   ENUMERATOR,
   ENUMERATOR,
};

That’s right — all you have to do is add the class keyword after enum and you have a scoped enum!

Converting the final example from the last section to use a scoped enum looks like the following:

Shape.h

enum class Shape
{
   eInvalid,
   eSphere,
   eBox,
   eCone,
};

ShapeOps.h

enum class Shape;   // forward declaration -- YAY

namespace ShapeOps
{
   const char* GetName(const Shape shape);
}

Here is an example of client code:

const Shape shape = GetShape();
if (shape == Shape::eInvalid)
   printf("Shape::Invalid\n");
if (shape == Shape::eSphere)
   printf("Shape::Sphere\n");
if (shape == Shape::eBox)
   printf("Shape::Box\n");
if (shape == Shape::eCone)
   printf("Shape::Cone\n");

This is exactly what we were looking for all along!

Another advantage of scoped enums is that their enumerators cannot be implicitly converted to integers. This eliminates the enumerator value aliasing we described earlier, and the compiler enforces it.
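
For example, continuing with the Shape enum from above (the scoped Material here is added just for illustration):

enum class Material
{
   eColor,
   eTexture,
};

const Shape shape = Shape::eSphere;

const int value = static_cast<int>(shape);   // OK: the conversion is explicit
// const int bad = shape;                    // error: no implicit conversion to int
// if (shape == Material::eColor) {}         // error: cannot compare Shape to Material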

Transitioning to Scoped Enums

Sauce is a fairly large code base: ~200K lines of code at the time of this writing. It took me a few days to convert 100+ unscoped enums to scoped enums. Because I was manually scoping all of the enums, this was not a simple “search and replace” task. Additionally, I spent extra time replacing includes with forward declarations, when appropriate.

Overall, I strongly believe the time investment was well worth it. The scoped enum syntax is natural, and the fact that scoped enums can be forward declared opens an opportunity to drop your header include count in some places. If you are considering transitioning your legacy code base to scoped enums, I highly recommend it!

Update 2016/01/31

During the past several months, I’ve been working on overhauling our UI system. As a final step, I wrote a new article comparing the old Interface library (UI 1.0) and the new system (UI 2.0): User Interface 2.0.

[Image: UI 2.0 Library Diagram]

This redesign took pretty much all of my coding time since the previous update. Next up on the list is some cleanup and then on to Animation!

UI 2.0

I just finished a complete overhaul of our User Interface system. This is a big deal because the Interface library has been an integral component in our engine, not to mention the fact that it was the product of many months of work.

Now that the redesign is complete, I wanted to take this opportunity to outline some of the decisions that were made and discuss why the redesign was needed in the first place. To keep things straight, throughout this article I will refer to the old system as “UI 1.0”, and the new system as “UI 2.0”.

A Brief Overview of UI 1.0

UI 1.0 was encapsulated in a single library called Interface. It was one of our largest libraries due to the number of controls it implemented:

  • Panel
  • Label
  • Button
  • CheckBox
  • ToggleButton
  • RadioButton
  • PictureBox
  • SceneView
  • Selector
  • TabControl
  • TextBox
  • NumericUpDown
  • ScrollBar (Horizontal and Vertical)
  • TrackBar (Horizontal and Vertical)
  • ScrollPanel
  • TableLayoutPanel

The Interface library was based on the set of controls I had created for our XNA codebase a couple years prior. For the most part, I was able to directly port the behavior logic into Sauce; however, for the visuals, our requirements were quite different. In particular, we wanted to support different variations of the controls: a basic one for our testers as well as one for each of the game projects.

To satisfy the requirement of visual variation, UI 1.0 was built around the concept of Styles: each Control (Panel, Button, CheckBox, etc.) had a corresponding Style (PanelStyle, ButtonStyle, CheckBoxStyle, respectively). Each control Style was an abstract interface which was implemented by our different variations.

An example for Button:

[Image: UI 1.0 Button Diagram]

In Model-View-Controller terms, the Control classes encapsulated the Model and Controller components, while the Style was the View. This made sense because regardless of how it looked, the behavior of a Control remained unchanged. So the idea was that Controls could be written once, and custom Styles could be derived from a corresponding abstract base class.

In practice, a Style pointer would be assigned on each Control, which simply forwarded on the task of rendering to the Style (if it had been assigned). Also, Styles were designed to be shared across multiple Control instances. This way, we could adjust a Style and all the associated Controls would instantly be updated.
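
In code, the arrangement looked roughly like the following. This is a simplified sketch, not the actual Interface classes:

class Button;

// The View: an abstract interface implemented by each visual variation.
class ButtonStyle
{
public:
   virtual ~ButtonStyle() {}
   virtual void Render(const Button& button) = 0;
};

// The Model and Controller: behavior written once.
class Button
{
public:
   void SetStyle(ButtonStyle* style) { mStyle = style; }

   void Render()
   {
      if (mStyle != nullptr)
         mStyle->Render(*this);   // forward rendering to the (shared) Style
   }

private:
   ButtonStyle* mStyle = nullptr;   // often shared across many Buttons
};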

Design Flaws

While we were able to use the Interface library with this feature set for the better part of two years, there were unfortunately a couple of fundamental problems with its architecture that prevented us from doing some essential things.

First, it turned out that most (if not all) of the data members from the control were needed to render its visual representation. This spiraled into a mess where some Style routines required nearly ten parameters each.

Not only was this painful to work with, but it also gave rise to another problem. Some derived Styles required certain data while others did not; however, the only way to access the “extra” data was to add it to the parameter list(s) in the Style base class. This led to several verbose and sometimes unintuitive interfaces.

Second, as mentioned earlier, Styles were designed to be shared. While this meant that we needed fewer objects, it also meant that Styles were unable to store any state for the purposes of rendering. In other words, Controls were required to house all of the state data. This broke two things: 1) a Control now needed to keep visual data when it was intended to be only the model and controller; and 2) new Controls needed to be written to support different views, which was the exact problem the Styles architecture was attempting to solve in the first place.

For example, there was no way to create a Button with a glow that would pulse. Since all Buttons were tied to the same Style, glow state data would need to be stored in the Button — but not all Buttons need a glow state!

Eventually, I realized that both of the design flaws actually stem from the same issue: MVC declares that the components should be separate (read: independent), but that should not be confused with restricted access. To be effective, the visual component needs access to all of the data about the Control, as well as have its own state data.

Introducing UI 2.0

Considering the issues outlined above were architectural, I knew that the UI system would have to be redesigned. There was no doubt that this was going to be a huge undertaking, so I decided to make a prioritized list of goals:

  1. Remove Styles and migrate to system where Controls are responsible for display
  2. Design for Composite Controls
  3. Implement a real Scrollable Area Control
  4. Support for Nine-Patch based Controls
  5. Improve rendering performance
  6. Animation support

I eventually tabled the last two since they require some groundwork to be completed in our Graphics system before they can be implemented. Perhaps they will be at the top of the list for UI 3.0…

Redesigned Control Hierarchy

For 2.0, I decided to create three separate libraries:

  • Ui: contains the abstract base classes for standard controls.
  • BasicUi: contains an implementation of Ui controls, using simple borders and backgrounds.
  • FlexUi: contains an implementation of Ui controls, using Nine-Patch for visuals.

[Image: UI 2.0 Library Diagram]

Each Control became an abstract base class, establishing the interface and handling the behavior logic. At the same time, Styles were removed and their functionality was extracted into respective derived classes.
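
A simplified sketch of the new arrangement (names abbreviated):

namespace Ui
{
   // Abstract base class: establishes the interface and owns the behavior.
   class Button
   {
   public:
      virtual ~Button() {}
      void Update() { /* behavior logic, written once */ }
      virtual void Render() = 0;   // visuals supplied by each implementation
   };
}

namespace BasicUi
{
   class Button : public Ui::Button
   {
   public:
      void Render() override { /* solid backgrounds, simple borders */ }
   };
}

namespace FlexUi
{
   class Button : public Ui::Button
   {
   public:
      void Render() override { /* Nine-Patch backgrounds and borders */ }
   };
}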

Composite Controls

Trying to create composite Controls in UI 1.0 was really painful. Styles would have to be passed down through the Control interface. This cemented the sub-controls utilized by the composite, stripping away flexibility.

In UI 2.0, I wanted to be able to use sub-controls without them being baked into the interface of the Control; in other words, I wanted them to be implementation details, which is what they actually are.

NumericUpDown Controls

My primary test case for Composite support was the NumericUpDown. In UI 1.0, the NumericUpDown had two buttons (+/-), but the value display was just static text, which could neither be edited nor copied. I really wanted to replace the static text with a TextBox control, but the Style framework was proving to be an obstacle instead of a means to a solution.

By implementing the Controls as a hierarchy with an abstract base class, creating Composites fell into place naturally. This was a pleasant and most welcome surprise, especially after working with the mess in 1.0.

The only difficulty I found was in determining where to place the sub-controls. In the NumericUpDown, I used virtual functions to instantiate derived versions of the TextBox and Buttons, and then used their abstract interfaces in the update logic. While this works just fine, it feels a bit inside-out. As I mentioned above, I came to the conclusion that sub-controls should be implementation details. To stand true to this statement, the TextBox and Buttons should really be created and updated in the derived controls instead of in the abstract base class. However, structuring Composites in that way also means that there is bound to be a decent amount of duplication of the update logic code in each of the derived controls. So at this point, I’m ambivalent as to which design is superior.
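
For reference, a sketch of the virtual-function approach described above (the names are hypothetical):

namespace Ui { class Button; class TextBox; }

class NumericUpDown
{
public:
   virtual ~NumericUpDown() {}

protected:
   // Derived library versions return their own control flavors here.
   virtual Ui::TextBox* CreateTextBox() = 0;
   virtual Ui::Button*  CreateButton(const char* label) = 0;

   void CreateSubControls()
   {
      mText     = CreateTextBox();
      mIncrease = CreateButton("+");
      mDecrease = CreateButton("-");
   }

   // The shared update logic drives the sub-controls through their
   // abstract interfaces.
   Ui::TextBox* mText     = nullptr;
   Ui::Button*  mIncrease = nullptr;
   Ui::Button*  mDecrease = nullptr;
};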

Scrollable Panel

UI 1.0 included a proof of concept implementation of a scrollable panel: ScrollPanel. Unfortunately, I quickly realized during development that there was just no way to add child Controls to the scroll canvas.

Scrollable Panels are a must-have feature for our target project, so this had to be addressed.

ScrollPanel

In UI 1.0, a ControlManager class handled all the rendering and intersection traversal through recursion. This was possible because the Control base class had a list of child controls that the ControlManager could access and manage the flow. As such, Control implementations were very simple since they were only responsible for rendering themselves. However, this setup was far too rigid and did not allow for Controls to render children within their Render() function.

For UI 2.0, I decided that each Control would have to be responsible for intersecting and rendering their child controls. While this places a lot more of a burden on each Control, it enables us to implement a virtual canvas for the ScrollPanel’s child controls. In practice, I found this structure to be a bit more intuitive than the former, since there is less code “hiding” in the base Control class implementation. It also made the base Control class a lot more lightweight, which is always a good thing.
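
A rough sketch of a Control rendering its own children, with the ScrollPanel shifting them into its virtual canvas (a simplified illustration, not the actual classes):

#include <vector>

struct Point { int x, y; };

class Control
{
public:
   virtual ~Control() {}
   virtual void Render(const Point& offset) = 0;
};

class ScrollPanel : public Control
{
public:
   void Render(const Point& offset) override
   {
      // Render our own frame first (omitted), then render the children
      // shifted by the scroll position: the virtual canvas.
      const Point canvas = { offset.x - mScrollX, offset.y - mScrollY };
      for (Control* child : mChildren)
         child->Render(canvas);
      // Clipping to the panel's client rectangle is also required here,
      // which is where most of the tricky details live.
   }

private:
   std::vector<Control*> mChildren;
   int mScrollX = 0;
   int mScrollY = 0;
};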

Building a scrollable panel is no simple task. There are a lot of details to consider when intersecting and rendering a virtual canvas. Aside from getting the architecture right, this was probably the most difficult part of implementing UI 2.0.

Nine-Patch

As was the case in UI 1.0, I wanted to have two distinct sets of controls: one for tester widgets and a dashboard, and another set for in-game UI.

For the most part, I have made use of the first set, which I call “Basic UI”. Basic UI Controls are visually simple: solid color backgrounds, borders, simple text.

[Image: BasicUi-Button]

I dubbed the “in-game” control library: Flex UI. The controls are primarily based on using a Nine-Patch to draw their backgrounds and borders.

[Image: FlexUi-Button]

A Nine-Patch is actually just a single texture sliced into 9 parts (as shown in the figure below). The benefit to using a Nine-Patch is that you can keep crisp corners and edges, while stretching the texture in the directions you would naturally expect.

[Image: Nine-Patch Example]

The only caveat is that you need extra data to know where to make the slices. For now, the Flex UI assumes that the corners are 16 x 16 pixels, but the intent is to make the system robust enough to accept arbitrary slice sizes.
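
For the curious, computing the nine slice rectangles from a texture’s dimensions is straightforward. A sketch using the current hard-coded 16-pixel corners:

struct Rect { int x, y, w, h; };

// Slice a (w x h) texture into 9 patches with fixed-size corners.
// Corners keep their size, edges stretch along one axis, and the
// center stretches along both.
void ComputeSlices(const int w, const int h, Rect out[9])
{
   const int c = 16;                       // corner size (hard-coded for now)
   const int xs[3] = { 0, c, w - c };      // column offsets
   const int ys[3] = { 0, c, h - c };      // row offsets
   const int ws[3] = { c, w - 2 * c, c };  // column widths
   const int hs[3] = { c, h - 2 * c, c };  // row heights

   for (int row = 0; row < 3; ++row)
      for (int col = 0; col < 3; ++col)
         out[row * 3 + col] = { xs[col], ys[row], ws[col], hs[row] };
}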

Stylesheets and Factories

At the very beginning of the redesign, I was hoping to implement some sort of Stylesheet system. After attempting a proof of concept, I realized that the same types of problems I had been trying to avoid were beginning to make their way back into the system. Consequently, I tabled the idea.

Later on in development, I resolved that the best way for Composite controls to create their sub-controls was for all controls to have access to a ControlFactory. So I created the ControlFactory as an abstract base class that is implemented by the Basic UI and Flex UI systems.

As it turns out, the ControlFactory is actually the perfect place to put the Stylesheet since the visual data can be applied to the corresponding control type. The only thing missing (without some substantial changes) is that the Controls cannot be updated dynamically if a Stylesheet is modified. I decided that while such a feature is cool to have, it would never be used in a final game project.
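
A sketch of the factory’s shape (hypothetical names):

namespace Ui { class Button; class TextBox; }

class ControlFactory
{
public:
   virtual ~ControlFactory() {}

   // Each UI library returns its own control flavor, with the factory's
   // stylesheet data already applied at creation time.
   virtual Ui::Button*  CreateButton()  = 0;
   virtual Ui::TextBox* CreateTextBox() = 0;
};

// FlexUi's implementation would hand back FlexUi::Button, FlexUi::TextBox,
// and so on; BasicUi's would do likewise.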

Final Thoughts

Although it took me a few months to complete, the UI 2.0 system is now in place and being used by the rest of the engine in the same capacity as the old Interface library. The effort was well worth it. I feel that the new architecture is far more flexible and extensible than the previous. Also, in addition to bringing the new features online, the overhaul gave me a chance to address a lot of the little things that had been bothering me, which is always nice.

Update 2015/06/07

It’s been a long while since our last update post. A lot has transpired since then, so I’ll do my best to outline the highlights.

Input Delivery

First and foremost, we re-architected how input is delivered throughout the engine. Previously, the input state was fetched from the devices and packaged into a single data object, which was then passed to whatever systems required input. While simple, this paradigm had many drawbacks, which I hope to discuss at length in a future post.

The new system is event driven. This allows us to employ a layer system, where “higher” layers in the stack can consume input events so that “lower” layers don’t try to handle them as well. A system like this is essential for games that have interactive UI elements overlaid atop the game scene.
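
The dispatch logic itself is simple; a minimal sketch (the names are illustrative, not our actual classes):

#include <vector>

struct InputEvent;   // key press, mouse move, etc.

class InputLayer
{
public:
   virtual ~InputLayer() {}
   virtual bool OnEvent(const InputEvent& e) = 0;   // true = consumed
};

// Visit layers from the top of the stack down; the first layer that
// handles the event consumes it, so lower layers never see it.
void Dispatch(std::vector<InputLayer*>& stack, const InputEvent& e)
{
   for (auto it = stack.rbegin(); it != stack.rend(); ++it)
      if ((*it)->OnEvent(e))
         return;
}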

This was by far the most significant change we have made in the last year or so, and the benefits it affords us are well worth the effort.

Compression

We added a Compress library. At present, this is a light wrapper around zlib, though it could include other compression algorithms in the future should the need arise.
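
A light wrapper over zlib’s one-shot API needs only a few lines. A sketch (error handling trimmed; the function name is hypothetical):

#include <vector>
#include <zlib.h>

// Compress a buffer with zlib; returns an empty vector on failure.
std::vector<Bytef> Compress(const Bytef* src, uLong srcLen)
{
   uLongf destLen = compressBound(srcLen);   // worst-case output size
   std::vector<Bytef> dest(destLen);

   if (compress2(dest.data(), &destLen, src, srcLen, Z_BEST_COMPRESSION) != Z_OK)
      return {};

   dest.resize(destLen);   // shrink to the actual compressed size
   return dest;
}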

Our primary use case for compression was our proprietary asset file format: the Sauce Engine Assembly (*.SEASM). Below is a table comparing our uncompressed (v1) and compressed (v2) file versions:

Model    SEASM v1   SEASM v2   v2 / v1
Pawn     80.47 KB   31.66 KB   39%
Bunny     1.73 MB    1.14 MB   66%
Dragon   21.66 MB   11.94 MB   55%

[Image: Pawn-Bunny-Dragon]

Streams

We added a new type of input stream to the Streams library: VolatileInputStream. This allowed us to significantly shorten our file load times. You can read about the details of the VolatileInputStream in the Streams post.

Asset Pipeline

We spent a good amount of time solidifying our FBX model importer. It can now load geometry and basic material data from Blender. Extracting data from the FBX SDK in a robust manner is no easy feat.

Also, a lot of progress was made on extracting animation data as keyframes from FBX, yet, sadly, this is still incomplete.

Interface

Thanks to the event driven input system, the Interface library now has keyboard support. We also added a primitive implementation of focus.

Furthermore, we reworked how the Anchor property affects the control placement and added the ability to “dock” controls. The result is that controls can now be arranged in the same way that .NET supports.

We also finally added a TextBox control. This was the last of the “standard control set” we had been hoping to implement. A TextBox has a lot of functionality under the hood to make it work as expected: text input, caret movement, selection, and even Clipboard support (Ctrl+X, Ctrl+C, Ctrl+V).

[Image: TextBox]

JSON

The most recent addition to the engine is the JSON library. We now use JSON as our config file format (previously we used XML). You can read about the details in the JSON post.

Visual Studio

Last but not least, we migrated from Visual Studio 2010 Express to Visual Studio 2013 Community Edition. This is fantastic! As soon as we found out that the new edition had been released and offered the same feature set as the professional versions, we jumped on it.

This required a few changes to our VSGEN tool which generates the project and solution files with our configuration settings.

JSON

Early last month, I set out to add support for JSON into our game engine. To my surprise, it turned out to be a fun and rewarding adventure.

[Image: JSON Logo]

JSON is a very nice format that is fairly easy to parse. Its feature set is small and well defined, including:

  • explicit values for null, true, and false
  • numbers (integers and floating-point)
  • strings
  • arrays
  • hash tables

This feature set is perfect for configuration files, stylesheets, etc. In the past, I have used XML for these sorts of things, but JSON is much more direct and compact.

Initially, I reached for an external library to wrap, just as we have done for many of the other file formats we support, namely: PNG, XML, FBX, and OGG. Of course, when it comes to external libraries, your mileage will vary. For example, we use TinyXML 2 as the basis for our XML library; it was a real pleasure to use, with a very straightforward, well designed interface. The FBX SDK, on the other hand, is pretty atrocious.

Unfortunately, I wasn’t very satisfied when it came to JSON. Many of the C++ JSON libraries out there make use of STL and/or Boost, dependencies we have striven to avoid. Eventually I settled on RapidJSON due to its high praise on the web; however, about halfway through my wrapper implementation, I concluded that its interface was not as clean and “wrappable” as I had originally thought.

After some reflection, I decided that the best way forward was to roll my own. I found that rolling your own is an excellent decision for a few reasons:

First, the JSON format is relatively small, unambiguous, and well documented. This allows you to focus on the architecture and interface of your wrapper. I found the experience both valuable and refreshing.

Second, you are able to employ the use of your native data structures. Naturally, this is a great way to test your functionality and interface. In the case of Sauce, I was able to leverage the following Core structures: String, Vector, Array, and HashMap.
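
To give a flavor of what native data structures buy you, here is a minimal sketch of a JSON value type; the std:: containers stand in for Sauce’s Core String, Vector, and HashMap:

#include <map>
#include <memory>
#include <string>
#include <vector>

// A JSON value is exactly one of the types in the format's feature set.
struct JsonValue
{
   enum class Type { Null, Boolean, Number, String, Array, Object };

   Type type = Type::Null;

   bool        boolean = false;
   double      number  = 0.0;
   std::string string;
   std::vector<std::shared_ptr<JsonValue>>           array;
   std::map<std::string, std::shared_ptr<JsonValue>> object;
};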

Last, but not least, I found it to be a whole lot of fun! It’s been a while since I’ve done anything like implementing a format encoder and decoder. Hopefully when you’re finished, you feel the same.

After I finished our JSON library, I converted our config files from XML to JSON with very little effort. The result is that our config files are more compact than they were with XML, and now we have the utilities required for future development. Overall, I feel it was well worth the time and effort.

Streams

In Sauce, we have a small, tight Streams library to handle the input and output of data in a standardized manner. After all, a game engine isn’t very exciting without the ability to read in configuration and asset data.

We use a stream as our main abstraction for data that flows in and out of the engine. In the case of input, the engine doesn’t need to know the source of those bytes; they could be coming from a file, memory, or over the network. The same holds true for output data. This is an extremely important feature that we can exploit for a number of uses, including testing.

Also, it should be noted that a stream is not responsible for interpreting the data. It is only responsible for reading bytes from a source or writing bytes to a destination.

As you might expect, we have two top level interfaces: InputStream and OutputStream. We’ve seen code bases where these are merged into a single Stream class that can read and write; however, we prefer to keep the operations separate and simple. Each of these interfaces has a number of implementations as described below.
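
As a sketch, the shape of these two interfaces might look like the following (the actual Sauce signatures may differ):

#include <cstddef>

enum class Endianness { Little, Big };

class InputStream
{
public:
   virtual ~InputStream() {}

   // Read up to 'count' bytes into 'buffer'; returns the number read.
   virtual size_t Read(void* buffer, size_t count) = 0;

   Endianness GetEndianness() const { return mEndianness; }

protected:
   Endianness mEndianness = Endianness::Little;
};

class OutputStream
{
public:
   virtual ~OutputStream() {}

   // Write 'count' bytes from 'buffer'; returns the number written.
   virtual size_t Write(const void* buffer, size_t count) = 0;

   Endianness GetEndianness() const { return mEndianness; }

protected:
   Endianness mEndianness = Endianness::Little;
};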

Input Streams

[Image: InputStreams]

The primary function for an InputStream is to read bytes.

Also, we store the endianness of the stream. This is an important property of the stream for the code that interprets the data. If the stream and the host platform have different endians, the bytes need to be appropriately swapped after being read from the InputStream.

Our Streams library features three types of input streams:

  • File Input Stream
  • Memory Input Stream
  • Volatile Input Stream

File Input Stream

This is probably the first implementation of InputStream that comes to mind. The FileInputStream is an adaptor from our file system routines to open and read from a file to the InputStream interface.

As an optimization, we buffer the input from the file as read requests are made. However, this is an implementation detail that is not exposed in the class interface; we could just as well read directly from the file — the callsite shouldn’t know or care.

Memory Input Stream

The MemoryInputStream implements the InputStream interface for a block of memory. In our implementation, this block can be sourced from an array of bytes or a string.

This implementation in particular is extremely useful for mocking up data for tests. For example, instead of creating a separate file for each JSON test, we can put the contents into a string and wrap that in a MemoryInputStream for processing.

Volatile Input Stream

Simply put, the VolatileInputStream is an InputStream implementation for an external block of memory.

For safety, the MemoryInputStream makes a copy of the source buffer. This is because in many cases, the lifetime of an InputStream may be unknown or exceed the lifetime of the source buffer.

Of course, in the cases when we do know the lifetime of the source buffer will not exceed the use of the InputStream, we can make direct use of the source buffer. This is the core principle behind the VolatileInputStream.
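
Building on the InputStream sketch above, the difference between the two implementations is just ownership (the bodies here are illustrative, not our actual code):

#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>

// Owns its data: copies the source buffer, so it is safe for any lifetime.
class MemoryInputStream : public InputStream
{
public:
   MemoryInputStream(const uint8_t* data, size_t size)
      : mData(data, data + size) {}

   size_t Read(void* buffer, size_t count) override
   {
      count = std::min(count, mData.size() - mPosition);
      std::memcpy(buffer, mData.data() + mPosition, count);
      mPosition += count;
      return count;
   }

private:
   std::vector<uint8_t> mData;   // private copy
   size_t mPosition = 0;
};

// Borrows its data: no copy, no allocation; the caller guarantees the
// source buffer outlives the stream.
class VolatileInputStream : public InputStream
{
public:
   VolatileInputStream(const uint8_t* data, size_t size)
      : mData(data), mSize(size) {}

   size_t Read(void* buffer, size_t count) override
   {
      count = std::min(count, mSize - mPosition);
      std::memcpy(buffer, mData + mPosition, count);
      mPosition += count;
      return count;
   }

private:
   const uint8_t* mData;
   size_t mSize;
   size_t mPosition = 0;
};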

Output Streams

[Image: OutputStreams]

The primary function for an OutputStream is to write bytes.

Also, just like in the InputStream, we store the endianness of the stream. This is an important property of the stream for the code that writes the data. If the stream and the host platform have different endians, the bytes need to be appropriately swapped before being written to the OutputStream.

Our Streams library features two types of output streams:

  • File Output Stream
  • Memory Output Stream

File Output Stream

Similar to the input version, a FileOutputStream is a wrapper around our file system routines to open and write to a file.

However, unlike the FileInputStream, we do not buffer the output.

Memory Output Stream

The MemoryOutputStream implements the OutputStream interface for a block of memory. The internal byte buffer grows as bytes are written.

For convenience, we added a method to fetch the buffer contents as a string.

Again, this is extremely useful for testing code like file writers.

Readers and Writers

Admittedly, the stream interfaces are very primitive. They are so primitive, in fact, that they can be a bit painful to use by themselves in practice. Consequently, we wrote a few helper classes to operate on a higher level than just bytes.

We’ve found this to have been an excellent choice. It is not unusual for a single stream to be passed around to more than one consumer or producer. Separating the data (the stream) from the operator (the reader/writer) gives us the flexibility we need and the opportunity to expose a more refined client interface.

Readers

For InputStreams, we implemented a BinaryStreamReader and a TextStreamReader.

The BinaryStreamReader can read bytes and interpret them into primitive data types, as well as a couple of our Core data types: strings and guids. We use this extensively for reading data from our proprietary file formats.
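
As an illustration of the byte swapping mentioned above, reading a 32-bit unsigned integer might look like this (a sketch that leans on the InputStream sketch from earlier, not Sauce’s actual reader):

#include <cstdint>
#include <cstring>

// Read a 32-bit unsigned integer, swapping bytes when the stream's
// endianness differs from the host's.
uint32_t ReadUInt32(InputStream& stream, const Endianness host)
{
   uint8_t bytes[4];
   stream.Read(bytes, 4);

   uint32_t value;
   std::memcpy(&value, bytes, sizeof(value));

   if (stream.GetEndianness() != host)
      value = (value >> 24) |
              ((value >> 8) & 0x0000FF00u) |
              ((value << 8) & 0x00FF0000u) |
              (value << 24);

   return value;
}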

The TextStreamReader can read the stream character by character, or whole strings at a time. This makes it ideal for performing text processing tasks like decoding JSON.

Writers

For OutputStreams, we implemented a parallel pair of writers: BinaryStreamWriter and TextStreamWriter. In both, we perform the appropriate byte swapping internally when writing multi-byte data types.

The BinaryStreamWriter can take the same set of data types supported by the Reader and write their bytes to the given OutputStream.

The TextStreamWriter can write characters or strings to the given OutputStream.

Summary

The Sauce Streams library has been a vital component to our development. We use it to read in models, textures, and configuration files; and we use it to write out saved games and screenshots.

We hope that this high-level discussion will help our readers with designing their own set of stream classes.