Hale's Folio

An introduction to the thoughts and works of Chris.

Before working in a professional setting with 3D graphics I made game levels for Source engine games. The experience exposed me to the tools and technologies that help make games enjoyable, as well as to a community of others who made and played them. It should be no surprise that I later wrote a 3D renderer for the Quake III BSP map format from which these games derive.

I started by learning basic techniques: lighting scenes, applying textures, creating terrain, and constructing indoor environments. It takes time to hone these techniques. Notably, it is easy to misjudge the scale of the environment you’re creating until you jump into the game and play it.

Once I understood the fundamentals I focused on gameplay. Greyboxing is a prototyping method that lets you focus on playtesting concepts before filling in the detail. Greyboxing also means you don’t have to be an expert in, say, lighting to create a solid experience.

A friend and I would playtest a map together and discuss its design. Sizing was invariably a problem: Was there enough room for varied play? How many players does the map accommodate, and how does it scale? Are the points of contention where we intended them to be?

The game mode greatly impacts the design of a map from concept to realization. For game modes like capture the flag we aim for balanced gameplay, which is easily achieved by making maps symmetric. Achieving balance on an asymmetric map is trickier. Games like L4D trade balanced gameplay for expansive maps.

The greatest reward was seeing maps I created being enjoyed by others. This gave me an opportunity to observe players and incorporate feedback. It was neat to see someone else modify a map with custom textures after I had done the greyboxing.

Quotemarks is a project I began two years ago in my spare time to practice the software development process. In recent years I’ve been the one providing technical insight while the high-level direction of a project has been the responsibility of a peer. This project allowed me to wear different hats, such as those of project manager and product manager, and to apply what I’ve learned to better inform future endeavors. What follows is the process I used in creating what eventually became Quotemarks, which I shipped just in time for the new year. First, I walk through defining a vision and creating prototypes to rapidly iterate over ideas. Then, armed with objectives toward a minimum viable product, I focus on process and delve into some insights on documentation.

I chose a bite-sized project so as not to get caught up in the details of a massive undertaking. While nothing novel nor technically sophisticated, I settled on a mobile app for organizing a collection of quotations that I kept finding, often in the most peculiar of places. I was unhappy with the many apps and websites already out there, which focus on discovering quotations that someone else has curated for you. Instead, I wanted individuals to collect found quotations the way I had. A mobile app seemed like the most intuitive solution.

Vision Through Values

Early on I imposed a defining value upon this project: data liberation. This is the principle that data you’ve created should remain accessible to you. My view is that no rational person would use this app unless they had a way to get their data out of it. The outcome of this value was that I made data exporting a day-one feature. This up-front cost turned out to be beneficial for speeding up early prototyping and development.

Personas, Use Cases, and Prototyping Rapidly

The first step was defining initial requirements from which to build a prototype. I created two personas, almost like character descriptions in a theatrical play. This helped me clarify who I thought my potential users might be. If this weren’t a solo endeavor, these personas would have also been valuable when collaborating with other people over the lifetime of the project. People come and go and organizations pivot; it’s easy to forget who your users are, so personas keep everyone grounded. These personas informed the project’s use cases, which in turn became feature requirements.

At this point some might dive into code if they haven’t already. Hold on, we don’t know exactly what we’re building yet. Instead, I started by creating a low-fidelity “paper” prototype to allow for rapid iteration. I proposed two different plausible interface paradigms before settling on one, which I then refined. These prototypes are not really paper but rather PDFs containing rough sketches of interface components, with links between state snapshots of the interface. Setting the PDF viewer to single-page mode gives the illusion of a navigable interface. This is a powerful tool: it’s one thing to come up with an idea and another to rapidly test whether it really works. These paper sketches are deliberately rough and black and white so that no one gets caught up in the appearance of the application. After all, this was an exercise in information architecture and user interaction rather than aesthetic design. The result seems obvious now, which is perhaps an indicator of good choices.

Project Management and a Minimum Viable Product

With an initial prototype in hand I faced a conundrum about how best to derive a minimum viable product. Ideally, I’d document requirements, estimate time, and then execute. This is like floating a boat down Niagara Falls. The reality was that I was still defining the product and there were still lingering questions. We engineers would like everything to be perfect from the get-go so that we don’t have to repeat effort. That doesn’t always work out, since sometimes we don’t know what will work in context until we try. What I needed was a process. I originally divvied up tasks based on role, but this was too much overhead for a solo project. I settled on informally using Kanban cards, occasionally re-assessing and re-prioritizing based on how things were going.

With a process in place, I needed to validate my work by way of a minimum viable product. The paper prototype helped me walk through various scenarios, but it failed to place the app in context. This is like building a boat having only ever seen pictures of bodies of water. In particular, I had a feature where users could assign a quotation from their collection to be a “quotation of the day” displayed outside the app as a system extension. A prototype wasn’t much help here; I had to try it out. This led to defining a minimum viable product by prioritizing features and by considering what questions I needed to answer. I ended up with piecemeal milestone builds for review until I reached my objective.

Defining the necessary feature set for a minimum viable product is not always clear cut. In particular, when I first submitted the app I was surprised by a rejection from App Store reviewers on grounds of “app completeness.” Although I had spent the time nailing down the user interface and implementing the important features to make it a usable product, my design skills leave much to be desired, to the point that someone else thought I hadn’t gone far enough.

What surprised me most was deciding what to keep and what to cut under a looming deadline, and then how to go about defining subsequent releases. At the last minute I dropped an important feature that wasn’t quite ready in order to meet the release deadline. In the end, the release was delayed by an external factor, and in the interim I addressed the issue and made the release I had intended. In retrospect, while it’s easy to get caught up in the moment, hitting that deadline was less important than getting the release right.

Documentation as an Artifact

Throughout the development process I produced documentation as a form of communication and record-keeping. Organizations often treat documentation as secondary or omit it outright. Knowledge about how a product is intended to function, and the rationale for decisions made, gets locked away in people’s minds and is eventually forgotten or mis-remembered. This wastes time and creates bugs whose root cause is human negligence. While there is a tradeoff to consider, this is generally why documentation is critical.

Fortunately, just about anything can be construed as documentation. Some things that come to mind are test cases, code, requirements, personas, use cases, prototypes, email and chat correspondence, work notes, commit logs, and tickets and comments in an issue-tracking system. Documentation empowers people to do their best, and so I feel that documenting should be everyone’s responsibility.

For this side project the best I could do was be mindful. My initial intention was to have living documentation and release documentation to match the release cycle. I would have liked to incorporate documentation into my development process to accompany deliverables, as opposed to treating it separately. Unfortunately, an all too common scenario was that I would aggregate documentation only when revisiting a topic. Nonetheless, the documentation I did produce proved exceedingly helpful whenever I needed to revisit certain topics.


For this project I had an opportunity to wear many hats, which greatly increased my understanding of the essence of different roles. I am not a designer, but I do think an understanding of the user experience is helpful for any engineer regardless of their role. It was fun to experiment as a product and project manager, and I could see myself excelling in these roles in the future. At the end of the day, though, I am a software engineer.

A decade ago, after I had created a content management system for an organization I co-led, our website traffic was starting to pick up. It became clear that we needed a way to understand where people were visiting from, so I created self-hosted web analytics software called Grape Web Statistics. It was illuminating to see how information traveled through different communities on the web in a time before Twitter.

The design and functionality were inspired by a commercial software solution named Mint. I admired their interface, but back then software development was a pastime and I had no operating budget. Building my own was simultaneously an opportunity for hands-on learning about tracking and a chance to open source the product for those who couldn’t afford Mint.

The biggest challenge was analyzing the “user agent” string that tells a website what web browser a visitor is using. While there is a standard in place that allows parsing of the user agent, vendors frequently broke the standard (e.g., spoofing legacy software for compatibility reasons) or altered the structure of this information between versions. This made the categorization of user agents a chore that required constant monitoring and re-evaluation of our parsing implementation. To my amusement, this problem cropped up many years later while I was consulting with a client who hadn’t considered the potential consequences of toying with the user agent information.
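To make that fragility concrete, here is a minimal sketch of the kind of rule-based categorization involved, written in Python for illustration; it is not Grape’s actual implementation, and the rules and names are simplified. The ordering of the checks matters precisely because vendors borrow each other’s tokens: Chrome advertises “Safari,” and nearly every browser still leads with “Mozilla.”

import re

# Ordered rules: more specific tokens must be checked before the generic
# ones they spoof (e.g. Edge and Opera both contain "Chrome" and "Safari").
BROWSER_RULES = [
    ("Edge",    re.compile(r"Edge?/[\d.]+")),
    ("Opera",   re.compile(r"OPR/[\d.]+|Opera[/ ][\d.]+")),
    ("Chrome",  re.compile(r"Chrome/[\d.]+")),
    ("Safari",  re.compile(r"Version/[\d.]+.*Safari/")),
    ("Firefox", re.compile(r"Firefox/[\d.]+")),
]

def categorize(user_agent):
    # Return a coarse browser family, or "Other" when nothing matches.
    for name, pattern in BROWSER_RULES:
        if pattern.search(user_agent):
            return name
    return "Other"

ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
      "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36")
print(categorize(ua))  # Chrome, despite the Mozilla and Safari tokens

Every time a vendor changes its token format, a rule list like this silently starts misclassifying traffic, which is why the work never really ended.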

Website tracking methodologies and regulations have evolved since the sunset of Grape, which has made this topic more politically sensitive. Tracking has become more invasive through “super cookies” and data mining and sharing initiatives. The Electronic Frontier Foundation’s Panopticlick project illustrates how easily a visitor’s web browser can be uniquely identified across website operators. Privacy advocates have responded by way of the EU’s cookie directive, the Do Not Track policy, and data mining opt-out forms (if you are knowledgeable enough to know where to look). All that said, it is certainly valuable for content creators and organizations to have at least a basic understanding of where their visitors are coming from.

The Quate CMS was perhaps my first major product that I saw through the entire development process, from conception and design through implementation and support. I started the project from an internal need to manage website content creation for an organization I co-led. I open sourced it so it would be freely available to others, and it became our flagship product. To my pleasant surprise it was used by a couple of organizations in the local community during the peak of its life.

At the time I was displeased with alternative solutions that imposed artificial design restrictions on the appearance of the websites whose content they managed. I wanted users to be in full control of the appearance of their website, and I wanted the system to be simple and approachable. So the Quate CMS turned into a general-purpose content management system, and I later spun it off to bootstrap a number of other projects.

Looking back a decade later provides a curious perspective. Ultimately, the project served as a test bed for us software developers to learn about the technologies we were using. We focused primarily on making sure functional needs were met when we should have also paid more attention to architectural design choices. We were young, and the project helped us build the fundamentals needed for future work, learn valuable lessons about people and technology, and drive our educational ambitions forward.

I wrote this introductory article as a part of a collaborative series on software development tools and good practices:

“So you have discovered a bug in your project. You might ask, ‘How long has it been this way?’ and ‘What caused it?’ If you are unfamiliar with the project’s source code you may also be wondering where to start. We would like to identify the root cause of a bug to understand why it was introduced so that we are confident we are making an appropriate fix. Otherwise we might be treating a symptom, which may result in consequential bugs that leave our codebase in a poorer state. Git’s bisect command can be a fantastic tool for identifying the root cause of a bug.” Read the full article.

“There is no question that push notifications are changing the way we go about our daily lives. These brief interruptions cultivate a close relationship between us and our technology. This interaction is no longer restricted to email, phone calls, or texts, but extends to individual apps on our phones, tablets, and wearables such as watches that travel with us wherever we go. Push notifications have become a must-ship feature for apps. Just as go-to-market strategies are necessary to ensure the success of any app, so is having a push notification strategy.” Read the full article.

“We’ve all heard the phrase ‘with great power comes great responsibility.’ Assertions are just that: a powerful tool for software engineers to document procedures, isolate issues when they inevitably arise and identify potential problems early. The key is to know how to use them responsibly and effectively. This article will cover what assertions are, and when and why to use them.” Read the full article.

“You’ve been handed a crash report for your app but the stack backtrace contains indecipherable memory addresses. What’s a developer to do? In short, you’ll need to apply debugging symbols to the stacktrace to make it human-readable, a process which we call symbolication.” Read the full article.

In applications of computer graphics we often require real-time rendering of three-dimensional geometry. As geometry gets more complex, rendering large spatial representations carries a high computational cost.

For interactive 3D applications such as video games, a graphics renderer may take advantage of several geometry-culling opportunities. By representing geometry in a data structure such as a binary space partitioning (BSP) tree, the renderer may perform view frustum culling on the subdivided geometry, discarding geometry that falls outside the view volume implied by the scene’s camera. Another strategy is to pre-compute a potentially visible set (PVS) of tree nodes from any given node so that the renderer may cull occluded geometry.
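As an aside on how the PVS test works in practice: in the Quake III format, as I understand it, the visibility data stores one bit vector per cluster, and bit B of cluster A’s vector marks cluster B as potentially visible from A. Below is a minimal Python sketch of that test; the function and parameter names are my own illustrations, not code from the renderer.

def cluster_visible(visdata, bytes_per_cluster, from_cluster, test_cluster):
    # True if test_cluster is in the potentially visible set of from_cluster.
    if from_cluster < 0:
        # Camera is outside the map (no cluster): conservatively draw everything.
        return True
    offset = from_cluster * bytes_per_cluster + (test_cluster >> 3)
    return bool(visdata[offset] & (1 << (test_cluster & 7)))

# At render time the walk is roughly: find the leaf containing the camera,
# take its cluster index, then draw only leaves whose cluster passes this test.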


For this project I wrote a BSP renderer to measure the real-world performance of these two techniques. I implemented Id Software’s BSP file format and used maps from their game Quake III to examine the practical gains from PVS and view frustum culling. The geometry culling from the pre-computed PVS generally offered better performance gains than view frustum culling alone, and a combination of the two yielded the best results.
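For completeness, here is a sketch of the frustum test that complements the PVS check, again in illustrative Python rather than the renderer’s actual code. A leaf’s axis-aligned bounding box is culled when it lies entirely behind any one of the six frustum planes, each given as an inward-facing normal and a distance.

def box_in_frustum(mins, maxs, planes):
    # planes: iterable of ((nx, ny, nz), d) with normals pointing into the frustum.
    for (nx, ny, nz), d in planes:
        # Pick the box corner furthest along the plane normal (the "p-vertex").
        px = maxs[0] if nx >= 0 else mins[0]
        py = maxs[1] if ny >= 0 else mins[1]
        pz = maxs[2] if nz >= 0 else mins[2]
        if nx * px + ny * py + nz * pz + d < 0:
            return False  # the whole box is behind this plane: cull it
    return True  # the box intersects or is inside the frustum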

Some professors have a malicious desire to assign group projects. The inevitable problem with group projects is aligning everyone’s schedules for meetups. Attempts at finding reasonable meeting times are often thwarted by someone’s late-night lacrosse practice or choir concert, and the search can easily degenerate into a verbal game of battleship. While coordinating a group of three is reasonable, at five or six this becomes increasingly difficult. Why not create a data visualization to help tackle this problem?

Every data visualization should begin with a question to answer. In our case we want to know at what times group members are available in aggregate. This question is more nuanced than it first appears. Do we want to find a time when everyone is available, or is a time that works for a majority acceptable? (Note that our visualization becomes binary in the former variant.) Are we looking for just one unit of time or for consecutive time? Should the visualization emphasize the times when members are available, or the times when they are unavailable?

A Candidate Solution: An Intensity Chart

This table combines everyone’s schedules into one place. It uses both hue and size to encode availability. Data points are represented as circles to convey a halftone effect. To further this concept I use a larger size and a darker hue to represent greater unavailability. A possible consequence of this double encoding is that the user may have difficulty distinguishing between “complete availability” and “a single unavailability,” since both produce a small, faint circle. Although it’s not perfect, it would certainly be an improvement over the traditional and tiring alternatives were it to be integrated into a functioning product.
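The aggregation behind the chart is simple. Here is a Python sketch with illustrative names of my own: member schedules are sets of busy time slots, each slot gets a count of unavailable members, and that count drives both the circle’s radius and its darkness.

def unavailability(schedules, slots):
    # Map each time slot to the number of members who are busy then.
    return {slot: sum(slot in busy for busy in schedules) for slot in slots}

def encode(count, group_size, max_radius=10.0):
    # Map an unavailability count to (radius, grey level 0=black .. 1=white):
    # the more people are busy, the larger and darker the circle.
    t = count / group_size if group_size else 0.0
    return max_radius * t, 1.0 - t

slots = ["Mon 9", "Mon 10", "Mon 11"]
schedules = [{"Mon 9"}, {"Mon 9", "Mon 10"}, set()]
print(unavailability(schedules, slots))  # {'Mon 9': 2, 'Mon 10': 1, 'Mon 11': 0}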