Overlapping pathfinding regions

I found myself wanting to have stairs and walkways that appear on top of other walkable areas. I had avoided this up to this point because of issues with pathfinding.

While the A* algorithm I use for the actual pathfinding isn’t limited to two dimensions (it works with any node graph), the game world and base polygons I work with are 2d, and the library I’m using to combine and clip the polygons is 2d. You can find more information on this here.

Background

Consider the following room with an upper-level walkway that appears in front of the stairs:

Walkway in front of the stairs.

The player can be on the stairs or on the walkway. In the room script I need to track where they are, based on where they entered the room and how they have crossed through the area near the top of the stairs. It’s reasonably straightforward to do this and ensure that the player character has the correct depth value (so he/she appears in front of or behind the walkway).

If I want to describe the walkway areas for pathfinding, however, I end up with something like this:

If this were true 3d, this would probably work fine. However, in 2d a point where the polygon overlaps itself doesn’t really make sense. Is it on the walkway or on the stairs? Relatedly, my clipping library turns this polygon into this:

Now it’s no longer ambiguous. However, we can freely walk between the stairs and the walkway without going through the upper stair landing.

I could selectively place blockers depending on whether the player is on the stairs or the walkway. But that’s a lot of manual work, and I want a slightly more rigorous solution I can reuse in many rooms.

Pathfinding zones

My solution is to split the room into multiple pathfinding zones. Visually, this looks like:

The polygon for the lower level.

The polygon for the upper level.

Note that these have a small overlapping section near the top of the stairs. This is where we’ll transition between the two zones.

Along with this, I added the following functionality:

  • Rooms no longer have a single set of polygons that define their boundaries. Instead, they have a set for each pathfinding “zone”
  • Each actor (moving sprite) has a pathfinding zone number associated with it
  • I introduced a “pathfinding triggers” concept – these define small regions of the room that actors can pass through that will take them into a new pathfinding zone (this is how the actor’s pathfinding zone number is kept up to date). In the example we’re using, these would lie on the edges of the small transition area between the zones (see the sketch after this list).
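
As a rough sketch, here’s how such triggers could keep an actor’s zone up to date (Python pseudocode; the ZoneTrigger class and containment test are illustrative, not the engine’s actual API):

    from dataclasses import dataclass

    @dataclass
    class ZoneTrigger:
        region: object       # any containment test: polygon, control-color mask, etc.
        target_zone: int     # the pathfinding zone assigned to actors passing through

    def update_actor_zone(actor, triggers):
        # Run after each movement step: standing in a trigger's region
        # moves the actor into that trigger's pathfinding zone.
        for trigger in triggers:
            if trigger.region.contains(actor.position):
                actor.zone = trigger.target_zone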

In terms of movement, the room’s polygon boundaries are used for two functions:

  1. To answer the question “can I be here?” for an actor. This is used when moving the player with the keyboard (or controller). In this case, I can simply choose the polygon for the pathfinding zone the actor is in, and make the query.
  2. To be able to find a path from point A to point B. This is used for “click to move”, for moving NPCs from place to place, and also (importantly) when typing in a command that causes you to interact physically with an object (the game needs to move the player to the object).

For the second case, we have the problem that the target pathfinding zone might be different from the one we’re currently in.

To solve this, I extended the pathfinding logic like so:

  • If the target position is in the same zone as the start, do things just as before.
  • If the target position is in a different zone (B) than the start zone (A), find a transition point T between A and B. Then find a path from A to T, and then T to B. Combine the paths and use this as the result.

The transition point should be inside the overlapping areas of the two zones. To avoid unnatural paths, the transition area should be as small as possible.
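
In sketch form, the cross-zone case might look like this (assuming a find_path routine that handles a single zone, and a hypothetical transition_point lookup – these aren’t the engine’s real names):

    def find_path_across_zones(room, start, start_zone, goal, goal_zone):
        # Same zone: plain pathfinding over that zone's polygons, as before.
        if start_zone == goal_zone:
            return find_path(room.polygons[start_zone], start, goal)

        # Different zones: route through the transition point T that lies
        # inside the overlap of the two zones, then stitch the paths.
        t = room.transition_point(start_zone, goal_zone)
        first = find_path(room.polygons[start_zone], start, t)
        second = find_path(room.polygons[goal_zone], t, goal)
        return first + second[1:]    # drop the duplicated point T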

So the manual work to support multiple pathfinding zones in a room is:

  • Define the sets of polygons for each zone (of course) – they must overlap a bit.
  • Create triggers on each side of the overlapping transitions between zones – these could be anything (manual coordinate checks, polygons, or control colors). Currently I’m just using control colors, which is a bitmask built into the background. Associated with the triggers in code is the pathfinding zone number they assign to any actor passing through them.
  • Define a transition point for each pair of zones that transition into each other.

I should in theory be able to support any number of zones per room, but currently I just support two. With a little bit of extra work, the pathfinding algorithm mentioned in case (2) above could be extended to n zones (in case you need to transition to an intermediate zone separate from the start and end zones).

Note also that “click to move” is still ambiguous, because all you have is a 2d point, and you are making assumptions about which zone it’s in. In practice this isn’t an issue though. It’s completely reasonable just to make it use the zone that the actor is currently in.

Overlapping paths for the player

 

 

Dealing with a limited palette

The color palette I’ve chosen for Cascade Quest consists of the combinations of the sixteen EGA colors (to simulate the palette used by those old games, which often dithered two colors together). Since color A combined with B is the same as B combined with A, there are only 136 actual colors instead of 256. Not only that, but several combinations result in the same color, so there are really only 126 distinct colors.

This fixed palette poses a number of creative issues when it comes to choosing colors. I’ve already gone into some detail as to how I support blending two colors, but in this post I’ll talk about choosing colors from an artistic perspective.

The sixteen colors on each axis (the original EGA colors show up on the top left to bottom right diagonal). This isn’t very useful for choosing colors.

 

This is an attempt to organize the colors in a more useful way: hue left to right, various saturation bands top to bottom, and brightness top to bottom in those bands.

Choosing colors by explicitly specifying the two source EGA colors also isn’t very easy, since the color a given combination produces isn’t always obvious.

Some of the questions I need to answer when choosing a palette are:

  • What other colors are similar to this one, but lighter or darker? (or less saturated, etc…)
  • What other colors are extremely similar to this one?
  • I need a dark area, what good color choices do I have?

 

Variations around a certain hue

For example, for a certain area I wanted to have an orange and greyscale palette. The greyscale colors are fairly well known, but I wanted a quick way to figure out my options for orange colors.

An orange metal tube, using colors chosen with my tool.

In the fast palette lookup post, I mentioned a quick tool I made to let me visualize my set of colors. I extended it to encompass different color spaces, and let me click on a color to get its palette index.

 

In addition to RGB, I have:

  • HSV
  • HSL
  • The most useful one, what I call “perceptual HSL”, where luminance is actually the perceived brightness. The human eye responds to green light the most, and blue light the least. Thus (0, 0.2, 0) is perceived to be much brighter than (0, 0, 0.2). More details can be found here, and a sketch of the computation follows this list.
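
Here’s a minimal sketch of that computation, assuming the standard Rec. 709 luma coefficients (my tool may weight things slightly differently):

    def perceived_luminance(r, g, b):
        # Rec. 709 luma weights: green dominates perceived brightness,
        # blue contributes least. Inputs are linear RGB in [0, 1].
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    perceived_luminance(0.0, 0.2, 0.0)   # ~0.143: noticeably brighter...
    perceived_luminance(0.0, 0.0, 0.2)   # ~0.014: ...than the same amount of blue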

Here’s a screenshot showing the colors I’ve used for orange objects in the “orange and greyscale” areas:

Five “orange-ish” colors of varying brightness.

 

Dark colors

The dark colors are pretty limited, and my color visualizers tell me that I should probably focus on blue or purple colors if I want variety in dark colors.

Similar colors

If I want to place some somewhat hidden messages in the game – like a color that’s only slightly different from another – this lets me see that easily.

 

Color $78 is one of the greyscale colors (they are arranged along a column in the middle in this screenshot), but color $36 (a mix of teal and brown) is very close, and could be useful for a secret message written on a rock, say.

 

I have the ability to remap palette indices, so a secret message could be made very visible with a special light in game of course (that “remaps” the surrounding area). But it’s kind of a nice touch that when it’s “invisible”, it’s still actually just slightly visible (or completely invisible if I choose base color combinations that result in identical colors – Void Quest used this somewhere, for a hidden thing at night…).

Re-evaluating color choices

I can also look at palette choices I’ve made. I use five main colors for the “summit” scenes in the first act of the game.

Rock colors

I can visualize those five colors in my tool:

 

The summit rock colors

You’ll note that they all have exactly the same hue, which maybe isn’t the best choice. Shadows should generally be a little bluer, and highlights a little more yellow. So I might be able to find a better option (although it looks like my choices are a little limited in this region of the color wheel).

As I become more conscious about palette choice, hopefully this tool will help me out in making good decisions.

 

 

Talk Sweetly To Printer

Cascade Quest was shown to the public for the second time at Seattle Indies Expo (a PAX West side convention) on September 3rd.

 

The booth

Lots of space at SIX this year – they moved it to a new venue, closer to PAX and with air conditioning. Since I haven’t yet commissioned a proper poster for Cascade Quest, I decided to spruce up the booth with green stuff (since the start of the game takes place in a forest): green tablecloth, fake plants, green LED lights. And I purchased a couple of plushies based on characters in the game.

I also had buttons to hand out to folks who played the game or signed up to the email list.

 

Cascade Quest booth, with marmot, raven, ferns, and hand sanitizer.

 

Changes from last demo

I’m still just demoing the first act of the game (and I doubt I’ll change that before release). I had reworked the text input system quite a bit since I last demoed it, and I thought it worked much better. I also changed a few puzzles, redid a few screens and added a couple more.

I also expanded the analytics I recorded slightly – it’s no longer just the text input, but also autosaves (upon entering each screen), screenshots, and the player’s position on screen for each piece of text input. This was in the hopes of being able to diagnose difficult-to-reproduce bugs (every bug I found was easy to reproduce though).

I also added a third demo machine since I had space on two sides of my table and I knew there were going to be more people at Seattle Indies Expo than at the retro game expo back in June. This was a good idea, since all three machines were occupied most of the time.

The stats

  • Players entered 2395 rooms throughout the day
  • Players typed 7067 phrases into the game
  • 5 players made it all the way to the end of the demo
  • About 70 people sat down and tried the game
  • About 60 players made it out of the ranger station (where the first puzzle is)

That was a lot of data to process, and I ended up spending about a week afterwards making changes to the game to address issues or frustration that players encountered.

 

Stations full

 

Some of the things people typed:

  • Talk sweetly to printer (this was due to a bug in the way I handled putting paper into the photocopier in the first room – this frustrated a ton of people)
  • Love the printer (related to above)
  • Smash desk
  • Flip desk (someone’s getting mad – perhaps related to above)
  • Get some jordans so you can get hops
  • git gud

I added a bunch of in-game hints to explain how the typing system worked, and that alleviated much of the frustration that new-to-text-parser adventure gamers had.

I also added auto-correct, which eliminated a bunch of the frustration people had with typos.

One of the significant changes was to make text input more context sensitive – it will infer what you’re talking about depending on what you’re near. So ‘get’ will get the object in front of you, and ‘give mushroom’ will give your mushroom to the person in front of you. It worked a bit like this previously, but I expanded the scope significantly since the last demo.

In the original Sierra adventures, ‘look’ gave a general description of the room. With the contextual stuff, this now describes the object in front of you (or the room, if none). This seems like it was a lot more natural for people. I used my typing hint system to train people to type ‘look around’ to get a description of the room. Seemed to work well, so I think I’ll keep it.

I also added an in-game hint system, but very few people found it.

 

Making a hotfix earlier in the day, due to a game-breaking bug.

 

Some new changes

This time, there are a lot fewer big changes I needed to make in response to feedback. So that’s good.

I’m trying to formalize the conversation system a bit more, to make the characters seem a little more alive. You’ll be able to ask them about topics and such (e.g. ‘ask about paper’) – this already worked in a limited fashion, but I’m making it a bit more data-driven so it’s easy to add different responses to queries on various topics.

I may also add a way to identify the objects in front of you, or show hotspots. This might be part of some “casual mode” for more beginner players.

I need to make changes to the in-game hint system I implemented, because very few people found it. However, I felt it was maybe even a bit unnecessary, since I added a lot of signposting for the puzzle solutions (making the game a little less difficult).

Other notes

I added a secret death scene that I think only one person found. It made me very happy when they did, though.

I talked to lots of people who were excited about adventure games!

Seattle Retro Games Expo

At the Seattle Retro Games Expo on June 17-18, I demoed Cascade Quest to the public for the first time. It was a great experience, and a great way to get feedback from a subset of the gaming population.

The Booth

Cascade Quest booth at SRGE

I didn’t have time to commission a nice poster for the booth, so I just ended up using a large logo mounted on cardboard. The booth looked fairly cobbled together, but it was about as professional as other nearby booths, so I didn’t feel too bad. I also spent a day or two coming up with a demo mode of the game that plays through about 4 minutes of gameplay across various screens. This was probably better for attracting attention than any poster would have been. It shows lots of the text input, which I think was important.

I had two laptops set up (a good idea, since the power kept being cut the first day), one with a TV attached to it. I placed newsletter signup sheets on either side, and also had a signup form directly in the game (if you make it out of the ranger station, at which point you’re probably reasonably invested?). The in-game form ended up having slightly more signups.

How did folks react?

Even though this was a retro game expo, it mainly covers the 90s and early 2000s, with a heavy focus on Nintendo and Sega. So I was a bit worried it wouldn’t be the right demographic for Cascade Quest. That may have been true, but I still had a fairly steady stream of people playing the game both days. Most people who played (maybe two thirds) had played the old Sierra games before and were immediately intrigued by the similarity. For some, it was a completely new experience.

I would say the demographics for people who played my game tended towards older. They may not have been the majority of people attending the expo, but there were enough of them that it kept me pretty busy. I also had several small children (under 10? I’m not good at guessing children’s ages) play the game and enjoy it. Some were young enough that they didn’t really understand what was going on (nor were they able to progress very far), but still seemed to enjoy typing things in and seeing the responses. I did have one young girl play the demo to completion (solving most of the puzzles herself) over the course of about two hours.

 

Father/daughter team enjoying Cascade Quest

 

The good news is that I got overwhelmingly positive feedback about the game. However, I think most folks aren’t likely to give any negative feedback (critical feedback seemed to only come from other game developers). If someone doesn’t like it, they just play for a few minutes and leave (this happened a few times). But the vast majority of people played for a while and said they really enjoyed it.

Some folks were very excited, and told me I had really captured the nostalgic feel of the old Sierra games – but without the design problems they suffered from. This was really nice to hear. Many people didn’t even realize that the graphics weren’t actually EGA – they just looked that way (also my intention). And I got plenty of compliments on the graphics, which is nice for this programmer to hear.

I learned not to judge people’s enjoyment of the game by watching their expressions. Some had constant looks of confusion, anger, or frustration – but in the end said they really enjoyed it. Of course, it was nice to have people sit down and instantly start giggling at the responses the game was giving to their actions.

One person suggested (only half-jokingly) I include an object early in the game that – if you fail to pick it up – prevents you from finishing the game (of course, you don’t realize this until the end). While I am explicitly designing away those issues, this highlights the importance of keeping a good amount of nostalgia in there to appeal to older players. In that same vein, one person asked “can you die in the game?”, and was really happy to hear that deaths are indeed possible.

Analytics

The only analytics I had in the demo were tracking the players’ typed commands, and tracking whether or not the game reverted to a default response (thereby indicating it wasn’t understood). For future expos I want to be able to replay the players’ movements more completely so I can better understand what they’re trying to do.

My one regret is that I didn’t spend more time watching players play the game. I would walk behind them every once in a while to make sure they weren’t running into bugs, but in general I tried not to bother them. As a result, I don’t feel I got a good enough picture of what issues people ran into.

 

There’s that signup sheet lying on the ground…

 

It was painful at times watching people struggle with the puzzles. However, I began to realize that they should be struggling. In the end, those that made it to the end of the demo (or close to it) took about the time I had expected… roughly two hours. I want people to struggle, but I also want them to feel satisfied when they solve a puzzle (and say to themselves “ah, that’s so obvious, why didn’t I think of that?”). That’s a hard thing to gauge at a broad public demo like this though.

For some reason, players had a much harder time navigating the puzzles on Sunday than on Saturday. I’m not sure if this was just a random thing, or whether there was a difference in demographics (perhaps more hard-core players visited on the first day?).

Conclusions

People ran into a number of bugs with the demo. This told me that my object interaction architecture (which I rewrote several months ago) isn’t robust enough.

Many people praised the text parser autosuggest, but didn’t realize they could use it to actually complete words (they thought it was simply a reference for which words would be accepted). I probably need to tweak the UI a bit to make it behave more like a modern browser search box, rather than a programmer command prompt.

One thing that definitely stood out was that people expected “look” to describe the object in front of the player. Instead, a simple “look” is wired up to give the room description (like Sierra did), and you have to name the object to get a description of it. I do have the ability for many verbs (like “take” or “open”) to operate nounless, based on what’s in front of the player. It seems “look” needs to work that way too. I originally resisted this, because I didn’t want players to have to type “look around” to get a general description of a room. But the evidence was so strong that “look” needs to refer to a specific object that I will have to revisit this.

Some people thought the parser was more intelligent than it was, e.g. “tell my boss that I don’t know where to find paper”.

Other people thought it was dumber than it was. They would type “give copies boss” instead of “give copies to boss”. The problem with this is that the parser interprets the former as “give the boss to the copies” – that is, boss is the direct object, and copies is the indirect object (and we fail to get a valid response). As a result, I will probably lessen the restriction on the grammar and ignore the difference between direct object and indirect object for most scenarios (I struggle to see a situation in which both would be accepted and perform different but valid things).

Appealing to a broad demographic

I would really like to find a way to make the game appeal to a broader demographic than just those who enjoy the nostalgia. I had some success with this, but I think it’s still not accessible enough to modern gamers.

Further data corroborating this comes from the ratings Void Quest has been receiving. A lot of people really like it, but some people really hate it (a relatively high number of 1-star reviews). Unfortunately, people who hate it tend not to leave comments on what they didn’t like or what turned them off. So I’m basically left guessing. I have to assume that either:

  • the text parser turned them off and they didn’t realize that the game they were downloading wasn’t a point-and-click
  • the puzzles were too hard

I did have some people at Seattle Retro Game Expo be suspicious of the text parser at first, but then gradually get into the game once they got the hang of it. So part of it may be convincing people to keep playing long enough.

And so…

I’m currently in the process of refactoring some of Cascade Quest’s object interaction code to make it more robust and predictable. I’ll also integrate some of the puzzle feedback I got, and then begin work again on the other parts of the game.

 

 

 

 

Hints for Void Quest

A number of people are stuck on my Adventure Jam 2017 entry, Void Quest. The puzzles are pretty hard, especially if you haven’t played these kinds of games before. I tried to signpost the puzzles well, but I may have failed at times. Or maybe you don’t have tons of free time to walk around and figure things out! So without further ado, here are some hints. Highlight/select each bullet point to get ever more “spoilery” hints about a problem.

Hints

 

How do I get the story started!?

  • How do I get the story started!? Take that todo list on your wall!

 

How do I change from day to night?

  • Go to bed (assuming you don’t have anything to do, and aren’t stressed out by anything)

 

I want a listening device!

  • You were gonna get one via mail order, but you procrastinated. Look around your cabin.

 

I want something to see in the hole!

  • Well, your lantern might help a bit. But…
  • You really want something better. You were so distraught after the incident with fido and the tree, that you failed to notice something on that darned tree. Easier to see in daylight!
  • I wonder who put that there? Maybe they had a way to view pictures too.

 

How do I carefully lower things down the hole to explore it?

  • There’s another thing that has stuff lowered into it. Parts of it are broken, could be useful.
  • It’s the crank on the well! And it fits over the hole.
  • You’ll also need something under your front porch.
  • It’s a spool of fishing line! Look at the crank and the spool in your inventory.

 

I need a stamp!

  • Check your mailbox.
  • Postcard. And it didn’t get postmarked!
  • Remove it from the postcard. Carefully…
  • With some heat.
  • By heating your iron on the stove.

 

I need some cash!

  • Did you find a note from your neighbor?
  • The note blew into the stump you tried to pull.
  • Who stole the cash? Where is this squeaky thief now?
  • The thief has a little hole on the top right of his screen. It’s in there.

 

How do I get gas for the truck?

  • Make sure you have performed your other tasks first.
  • Have you lowered both a mic and camera? Did something happen afterwards?
  • One of those things that “happened” might be the tractor’s gas tank. It landed nearby.
  • Might be easier to find at night. What was attracted to the colors on the tractor?
  • THE ANSWER:
  • The fireflies show where it is in your front yard at night.

 

How do I get the truck keys?

  • They’re on the same screen as the truck.
  • At night is a good time to see them with your lantern.
  • They’re in the well. Oh, but they’re just out of reach.
  • They float (wooden keychain). Change the water level in the well?
  • You’ve got a container that can carry liquids, don’t you?

 

Some other questions:

What’s that little mouse for?

  • It’s leading you to something. Might make more sense if you found your neighbor’s note. Which blew onto your property…
  • into your stump.

What are those black shapes I see sometimes at night?

  • I dunno… it’s a mystery?

That pile of rocks? Any purpose?

  • It’s just fun to throw things in the hole I guess.

Is there any purpose to the fireflies?

  • Yup. They subtly hint at something which is important later in the game.

What’s the deal with the hole? Who made it? What’s in there?

  • Haha, nice try. What’s in there? Well, (most of) your tractor, for starters. Maybe you’ll have to play Cascade Quest to find out more.

Fast palette lookups

Cascade Quest uses a fixed palette that is based on the 16 default EGA colors. The 16-color Sierra games used dithering to give the appearance of more than 16 colors. Given the relatively blurry monitors and TVs of the day, this produced a more convincing effect than it does on modern crisp LCD monitors. As a result, Cascade Quest uses the “undithered” versions of these dithered colors. That is, each base color blended with every other base color. This results in the palette on the left below. Note that due to duplicates, the result is 136 unique colors (16 + 15 + 14 + … + 1), not 256.

 

Cascade Quest palette

16 color EGA palette on the right. Cascade Quest’s undithered palette on the left.

Using any limited-color palette (with arbitrary colors like VGA, or fixed colors like mine) presents some pretty significant limitations. The most obvious is that blending two colors together becomes much more difficult. We need to take the two RGB values, combine them, and then map this color back to the closest matching color in our palette.

Rationale

The brute force method for palette lookup is to calculate the euclidean distance between the reference color and each other entry in the palette. The closest one wins.
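
A minimal sketch of that brute-force search (squared distance is enough for comparison, so the square root is skipped):

    def nearest_palette_index(color, palette):
        # color is an (r, g, b) tuple; palette is a list of (r, g, b) tuples.
        def dist_sq(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(range(len(palette)), key=lambda i: dist_sq(color, palette[i]))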

This brute-force approach is fine in the case of a simple remap of each palette index to another – for instance, Cascade Quest does this to support automatically darkening (for sprites in shadow) or converting sprites and backgrounds to a certain lighting setup. These are fixed “remaps” that only need to be calculated once at startup.

However, it is much too slow to do on a per-pixel basis. We need per-pixel blending to support high quality scaling and pseudo alpha-blending. Below is an example of a mushroom. The original image is on the left. In the middle is what happens when you scale it to 73% of the original size using nearest neighbor sampling. This is what Sierra’s SCI1+ engines used for scaling, since it’s quick and you are only ever dealing with existing colors in your palette (SCI0 did not support scaling). On the right is a version that is scaled using bilinear filtering, with the resulting colors remapped to the palette.

 

Mushroom scaling

 

You could argue that the middle mushroom is more true to the retro aesthetic (but the right-hand one is clearly a more accurate representation of the mushroom). However, look what happens when you have an image with high frequency details:

 

Bars scale

 

All the bars have gone missing in the center image, since the nearest neighbor sampling at that zoom level ended up sampling only the white pixels. On the right is the result using bilinear filtering and remapping to the palette. When the scaling level changes smoothly in game, the problems with nearest neighbor sampling become even more distracting.

Initial approaches

How can we more quickly map an arbitrary RGB value to a palette index? We could generate a lookup table. However, to do this accurately, we’d need an entry for every possible color: more than 16 million of them! For optimum speed, whatever lookup table we use needs to be small enough to fit well within the processor’s L1 data cache so that memory access doesn’t become a bottleneck. A 16 megabyte array doesn’t fit the bill.

We could of course just make the grid coarser. Instead of 256 values for each RGB component, we quantize to 16 or 10 levels or whatever. Once we do this, however, we start to see significant numbers of incorrect results.

We can perhaps produce more intelligent quantization. If we look at all our palette colors, the combinations are such that there end up only being 10 unique values for each RGB component (in fact, the same ones for each component). This is a property of the source colors we are using, and wouldn’t be true for an arbitrary palette of course.

 

Discrete RGB component values

In the above image, I’ve drawn in grey lines halfway between the 10 possible component values. We can define buckets between each of the grey lines (a total of 10 buckets). Those buckets have the property that any component value that falls within them will have the contained value (black line) as the closest color component. So we end up with two lookup tables. One (of size 256) maps a single RGB component to a bucket. And the other (of size bucketCount ^ 3, or 1000 in this case) maps the three buckets (R, G, B) to an actual palette index. Of course, we also have to do a one time pass to calculate the closest palette color for each bucket.
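
Here’s what those two tables might look like in sketch form (assuming values holds the 10 discrete component values, palette the full color list, and the brute-force nearest_palette_index from earlier):

    import bisect, itertools

    # Bucket boundaries sit halfway between neighboring component values
    # (the grey lines in the image above).
    boundaries = [(a + b) / 2 for a, b in zip(values, values[1:])]

    # Table 1 (size 256): maps a component value to its bucket.
    component_to_bucket = [bisect.bisect(boundaries, v) for v in range(256)]

    # Table 2 (size 10^3): maps a bucket triple to a palette index,
    # precalculated once with the slow brute-force search.
    bucket_to_index = {
        (rb, gb, bb): nearest_palette_index(
            (values[rb], values[gb], values[bb]), palette)
        for rb, gb, bb in itertools.product(range(10), repeat=3)
    }

    def quick_lookup(r, g, b):
        return bucket_to_index[(component_to_bucket[r],
                                component_to_bucket[g],
                                component_to_bucket[b])]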

So, this would actually work perfectly if our palette contained every combination of those 10 discrete values (10 * 10 * 10 = 1000 palette entries) – but it of course doesn’t. As a result, it falls apart quite spectacularly. We can precalculate the nearest palette index for a particular (R, G, B) bucket, but the actual nearest color to any arbitrary (R, G, B) value in that bucket might be different (again, this wouldn’t be an issue if our palette had every combination of our 10 discrete component values).

I thought it would be useful to visualize my palette’s color distribution, so I put together a quick Unity scene that shows the RGB color space and where each palette value is within it.

Palette colors

Looking at a 2d cross section also helps. Here’s looking along the blue axis (thus we see the red and green distribution):

 

Palette cross section

Getting it done

 

Once I realized that a perfect solution to this problem is probably not possible, I set out to do the best I could. To get more concrete results, I came up with a random test corpus: 1000 randomly chosen colors. They aren’t completely random, though. Instead, they are random blends of two randomly chosen palette colors. This results in a color corpus that is closer to what might actually be used in the game for blending operations.
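
Generating such a corpus might look like this (a sketch; a uniform random blend factor is assumed here):

    import random

    def make_corpus(palette, n=1000):
        # Random blends of two randomly chosen palette colors - closer to
        # what in-game blending operations actually produce.
        corpus = []
        for _ in range(n):
            a, b = random.choice(palette), random.choice(palette)
            t = random.random()
            corpus.append(tuple(round(x + t * (y - x)) for x, y in zip(a, b)))
        return corpus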

For this corpus, the bucket method described above resulted in 349 of the 1000 colors being incorrectly matched. Pretty bad – in fact, barely better than using an even distribution. I tried doubling the resolution of the buckets. That improved the wrong matches to 215 out of 1000. But at the expense of a bucket lookup table of 8000 entries (20 * 20 * 20).

I then tried some even distributions, everything from 10 evenly distributed buckets to 20. While using more buckets generally reduced the number of mismatches, the relationship wasn’t monotonic. Fifteen buckets proved to be good bang for the buck(et) – 234 mismatches, which ended up being fewer than with 16 or 17 buckets.

Still sure that a specific bucket distribution could produce better results, I threw some computational power at the problem. I tried lots of slight variations for the boundaries on each bucket. This ended up finding a bucket distribution that results in only 149 mismatches out of the 1000-color corpus. Not too shabby.

So the approach I’m using (for now) maps each RGB component to a bucket index. The bucket indices are then used to index into the main lookup array (15 * 15 * 15 = 3375 entries).

Where is this used

I use this quick lookup for alpha blending and bilinear filtering during scaling. However, I use the slower accurate version for the global color remapping, since this is only done once at startup and only needs to evaluate 136 unique palette colors.

 

Outhouse

 

Outhouse vanish

Outhouse appears from thin air

Performance

I timed the drawing performance of the above outhouse alpha blending cycle. The frame draw times for the outhouse were:

  • Alpha blended, using the slow (correct) palette lookup: 13.3ms
  • Alpha blended using the quick lookup: 0.3ms (44 times faster)
  • No blending: 0.03ms.

I also tried seeing if the size of the lookup table made a difference. That is, 10 buckets instead of 15. It did not (though I’m sure increasing the number of buckets would start to slow things down).

A note on gamma-correctness

I’ve been ignoring this throughout this post, but it’s an important bit to touch upon. The base EGA colors are in gamma-corrected space. That is, those values are exactly what is displayed on the monitor. To properly combine them (to get our 136 color palette) so that they look just like the dithered colors, we need to convert them to linear, take the average of the two, and then convert back to gamma space. If we don’t do this, things will look significantly different.
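
A sketch of the gamma-correct average (assuming a simple 2.2 power curve rather than the piecewise sRGB function):

    GAMMA = 2.2

    def undithered(c1, c2):
        # Convert both gamma-space colors to linear, average, and convert
        # back to gamma space. c1 and c2 are (r, g, b) tuples in 0-255.
        def avg(a, b):
            linear = ((a / 255) ** GAMMA + (b / 255) ** GAMMA) / 2
            return round(255 * linear ** (1 / GAMMA))
        return tuple(avg(a, b) for a, b in zip(c1, c2))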

 

Gamma

Dithered color combinations with and without proper gamma correction.

 

Now, this should also be done when blending. I don’t do this (I just directly blend colors in gamma space) for a few reasons though:

  • It would be slower, since we need to do the conversion to linear and back for every pixel
  • The resulting colors will already be wrong anyway, since we’re mapping to our fixed palette. So the “perfection” won’t be noticeable.
  • Older graphics hardware has gotten away with not doing this for texture filtering (although in general modern graphics hardware does this properly).

Void Quest post-mortem

 

Over the two weeks from May 5 to May 19, I worked on an entry for Adventure Jam 2017. Leery of taking two weeks off from working on Cascade Quest, I decided I could use the time to:

  • Explore the backstory for one of the characters in Cascade Quest, and maybe unblock some creative hurdles
  • Use this opportunity to get some feedback on the input system and style of interactions in Cascade Quest
  • Shake out some of the workflow issues and weak points I have with my engine.

To that end, I used the same engine (built on top of Unity) that I use for CQ. Over the course of the jam, I made a bunch of improvements (or maybe let’s call them changes) to the engine to support new features.

Checking out my front yard

 

Approach

I decided to do pretty much all the coding and puzzles first, and the art last. That turned out to be a good approach (for me at least); I find art to be more predictable in terms of the time it takes. It did mean, however, that for the first 10 days of the jam the game didn’t really feel like a game. Everything was so ugly to look at that it took a lot of imagination to envision what the end product would be. Luckily, on a short project like this, motivation doesn’t really have time to wane.

Shortcuts

I took a lot of shortcuts in order to get the amount of content in that I wanted. A few that I can recall:

  • The player can climb over a fence as a quick way to another room. Instead of animating this, I just wipe to black and wipe to the new screen.
  • There’s a pickup truck whose engine needs to be started. To avoid having to draw door animations, I placed it so the driver side door was behind the truck as seen from the viewer (and I think I claimed it was missing, to avoid having to deal with “open/close” interactions).
  • There’s a scene where you bury something. Again, instead of animating this, I fade to black and just do sound effects.
  • A number of objects you have to pick up have no visible onscreen representation. They are hidden in holes, in vehicles, and so on.
  • I use a very constrained view of things, so that, for example, you never see your full cabin from the outside.

Issues with the parser and object interactions

I plan to do a post about this specific to Cascade Quest soon, but some of the suspicions I had about flaws in the way I’m doing things became more real.

I am trying to come up with a clean and easy way to map text parser input to verb-noun-noun. A while ago I changed from this kind of code:

(if (Said 'take/paper')
    ; give the paper to the player, etc...
)

to a more data-driven approach, where “features” are present in the current room, have a noun associated with them, and have a “doVerb” method that handles various actions on them. Then there is a more generic piece of code that inspects them and figures out the right feature whose doVerb should be called. This is more like how a point-and-click game would be structured.

It gets complicated of course, because “use gas can on truck” should map to “use-can-truck”. But so should “fill truck” (if you have the gas can). So features need a way to cleanly customize the way they map text input to verb-noun-noun. And it gets even more complicated when you have multiple objects with the same name. And even more complicated when you have inventory items that also have in-room representations.
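
To make the shape of this concrete, here’s a rough sketch of the feature/doVerb idea (Python pseudocode; the class and method names are illustrative, not the actual script API):

    class Feature:
        def __init__(self, noun, synonyms=()):
            self.noun = noun
            self.synonyms = set(synonyms)

        def matches(self, noun):
            return noun == self.noun or noun in self.synonyms

        def do_verb(self, verb, indirect=None):
            # Room scripts override this to handle 'take', 'look', 'use', etc.
            pass

    def dispatch(room_features, verb, noun, indirect=None):
        # The generic piece of code: find the feature the noun refers to
        # and hand the verb to it. (Ambiguity between multiple matching
        # features - one of the complications above - isn't handled here.)
        for feature in room_features:
            if feature.matches(noun):
                feature.do_verb(verb, indirect)
                return True
        return False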

This topic is too large for this post-mortem, but I’ll make a separate, more detailed post about it.

 

Improvements to the engine

Lantern glow

 

I made a number of improvements:

  • I now support per-pixel palette remapping. This is how I manage to get the glow around the player when they hold the lantern at night. The source sprites and background are all in normal color – but then the global palette is switched to a dark blue one for the nighttime, and the glow sprite uses a pinkish palette just where it is (sketched after this list).
  • Related to the above, sprites can disable drawing to the visual screen, and simply draw to the “per pixel palette index” screen to cause the palette to change in an area.
  • I support post-processing effects (really I should just move everything to the GPU, but…). Originally I was planning to use this to make it rain, but I didn’t end up putting that in. The only place it’s used is for the thing that comes out of the hole during the first nighttime.
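
Here’s a rough sketch of how the per-pixel remap from the first bullet might be applied when composing a frame (simplified, with illustrative names; the palette-index screen and palette tables are assumptions based on the description above):

    def compose_frame(visual, palette_screen, palettes):
        # visual[y][x] holds a base palette index; palette_screen[y][x]
        # selects which palette (normal, dark-blue night, pinkish glow)
        # that pixel is resolved through.
        height, width = len(visual), len(visual[0])
        return [[palettes[palette_screen[y][x]][visual[y][x]]
                 for x in range(width)]
                for y in range(height)]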

Art

Here I’ll just go over the backgrounds in the game, with my own comments on what could be improved (if you’re any kind of artist, I would love feedback in the comments).

 

Backyard

A problem with many of my backgrounds in this game is the blankness of the ground. I need to have more texture and more variety (granted, this is the background without the sprites). I’m also not quite satisfied with the fence; it perhaps needs higher contrast.

I didn’t quite know what to put in the background behind the bushes. Originally it was a cliff, but I ended up changing it to abstract trees, because I thought I could make them look scarier at night.

I do like the moss on the roof.

Next up is the room to the south of this:

Neighbor’s property

I like the framing I have here. The orange leaves and the tree silhouettes. I used jagged sharp edges for the silhouettes to try to give them a creepier look (especially at night!). I drew this one second, and started to be more intentional about this.

I’m not happy with the tire tracks, but my limited palette was perhaps hurting me here. And again, the blank ground.

The burned house, I dunno. I needed to distinguish the wood planks, and that’s the darkest grey I have, so that’s really all I could use.

Next up is the front of your cabin:

In front of your cabin

Again, I used the orange leaves and menacing silhouettes to frame the scene. I needed some variety for the background – so not just bushes, but cliffs (they were also helpful for the story). I’m reasonably happy with the cliffs. I referenced Pedro Medeiros’ wonderful pixel art tutorials for these. Probably the moss in the cracks could have been done better.

Again, the blank plain grass and the relatively featureless tire tracks.

One thing I’m not happy about is the cabin. I think placing it straight on was not a good idea (although it made the door art easier). It would look more interesting if it were slightly off angle.

Finally, the cabin interior. I did this with less than 24 hours left in the jam. At first I wasn’t quite feeling it, then I got into a nice zone where I was really enjoying the art.

I was a little un-enthused when it looked like this to start with:

Cabin first iteration

This was probably an hour’s worth of work, at least. Then I spent just a few minutes adding some “defects” to the walls and floors to break up the straight lines. Basically just drawing some black lines:

 

Cabin

Suddenly, so much better! After doing this, I did a bit of the same to the cabin exteriors in the other backgrounds.

And with all the props:

 

Cabin with sprites

 

 

 

Playtesting

My goal was to get a playtestable build ready by Wednesday (the 12th day of the jam), but that didn’t happen until Thursday. I managed to get two people to play (in total) about 2.5 hours of the game. This was absolutely essential and totally worth the distractions. I only wish I had gotten more people to play it. Even a simple parser-based game really needs a variety of people to try it to understand how different people will word things differently. As a result, the game is still fairly unpolished when it comes to accepting input. This kind of sucks, as one thing I want Cascade Quest to be good at is avoiding any “guess the word” issues.

 

WebGL build

I decided to build a WebGL version, since that gets more people playing the game. There were a few issues, however. One, which I’ve encountered before, is that Unity puts two backspace characters into the Input.inputString every time the backspace key is pressed and released. This messed up the text input, obviously.

The other, more devious issue was a “compiler bug”. Or rather, some flaw in the pipeline Unity uses to generate C++ from IL, and then JavaScript from that. The end result was that one particular switch statement just did not work. After an hour of very “slow iteration” debugging with Debug.Log (it took about 10 minutes to build the game, upload it to a server, and test it), I finally switched the logic to an if statement and that fixed the issue. I was lucky that the bug revealed itself in a very visual fashion (the sprites were only partially loading) – and it’s kind of scary that a switch statement was just broken. Where else might that be happening? I’ll need to come up with a simple repro project and submit the bug to Unity.

An Overview of Save Games

In the old days, Sierra was brutal when it came to save games. Death was around every corner, and it was up to the player to remember to manually save progress. Although Cascade Quest still has a good amount of death, that kind of unforgiving behavior isn’t really acceptable these days. On top of that, save games must survive patching – another complexity that wasn’t really an issue back in the day.

Surviving patches

Cascade Quest allows saving your game at almost any point. When this is done, the modifiable state of the world is serialized and saved to disk. This includes all global variables, all script variables, and all objects and their properties.

 

Save Game dialog


 

Changes to the game code can easily invalidate save games. Objects may be added or deleted, variables may be added or removed, and so on. Trying to reload an old save game with a changed codebase would create all sorts of problems.

Luckily, the way the game engine is set up, there is a limited amount of information that gets passed from room to room. When a new room is entered, all stateful scripts are unloaded, and then reloaded on demand. So the only state that gets transferred is the set of global variables. The new room is responsible for instantiating all objects that are relevant based on the global state.

And even among the global state, only a portion is relevant persisted state – basically a set of global flags and numbers (indicating, for instance, whether certain events have happened in the game).

Upon entering a room, this global state can be saved off somewhere. Then when the player saves a game, this data is saved alongside the rest of the save game data. If, when loading a save game, we see that it was for a different version of the game, we can instead load the global state that defined the game when the player entered the current room. The downside is that this puts the player back to when they entered the room (some UI will be needed to explain this, no doubt).
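
In sketch form (a minimal illustration with hypothetical helper names; the real serialization and version check are more involved):

    def load_save(save, current_version):
        if save.version == current_version:
            # Compatible build: restore everything - objects, script
            # variables, globals.
            restore_full_state(save.full_state)
        else:
            # Incompatible build: fall back to the snapshot of persisted
            # globals taken when the player entered the room, and re-enter.
            restore_globals(save.room_entry_globals)
            enter_room(save.room_number)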

Autosave

This is also a trivial way to implement autosave. Upon changing rooms, we save off the above-mentioned global state as the autosave. This does mean we need to be careful sometimes: there are cases when a player’s death is certain when they enter a room. In those cases, logic is needed at room startup to detect this and disable the autosave. These cases are rare, however.

Summary

Using that limited set of global state doesn’t completely absolve us of thinking about codebase changes affecting save games of course. Patches to the game can’t ever remove any global state – we can only add new flags or variables. So care must still be taken to have the new code correctly interpret the old code’s state. But a set of flags and variables is much easier to manage than an entire heap of instantiated objects and other state.

 

Text parser autosuggest

The text parser in Cascade Quest is similar to those in the old Sierra games, and decomposes a user’s input into a grammatical tree which is then matched against clauses in the game’s scripts.

This won’t be a post on how the text parser works, but instead on how the autosuggest is implemented. Typing in all your commands can be tedious, but autosuggest makes it a lot simpler.

 

Auto-suggest in action.


The basics

The game has a list of words it understands (about 1400 currently). They are tagged with their grammar class (verb, noun, etc…). As the player is typing, we do a prefix match against words in the database. The player can tab to autocomplete the current suggestion, or arrow to the desired suggestion. Or they can just use the suggestions to ensure their spelling is correct.

To help avoid poor suggestions, we apply a ranking system to all matching results. Here are some of the heuristics (sketched in code after the list):

  • When suggesting completions for the first word in a sentence, we prefer verbs over nouns (a more complex system could analyze the grammar tree, but I don’t do that).
  • Words that are in any currently loaded scripts are ranked higher.
  • Words that are specifically used in the current room’s script are ranked even higher.
  • There is a set of commonly-used preferred words that get a ranking boost. These include common verbs like ‘look’, ‘get’, and so on.
  • The most recently-used words get a boost (this does introduce some unpredictability in the results, however).
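
A sketch of how these heuristics might combine (the weights and names are invented for illustration; the real ranking is tuned by hand):

    def rank(word, context):
        score = 0
        if context.is_first_word and word.is_verb:
            score += 4                        # prefer verbs to start a sentence
        if word.text in context.loaded_script_words:
            score += 2
        if word.text in context.current_room_words:
            score += 3
        if word.text in PREFERRED_WORDS:      # 'look', 'get', ...
            score += 2
        if word.text in context.recently_used:
            score += 1
        return score

    def suggest(prefix, vocabulary, context, limit=5):
        matches = [w for w in vocabulary if w.text.startswith(prefix)]
        matches.sort(key=lambda w: rank(w, context), reverse=True)
        return matches[:limit]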

At some point, I’ll probably have to implement a banned list too. The game understands some “naughty” and/or trademarked terms (or in some cases special tokens that aren’t words) that I don’t want to display in an autosuggest.

Another possibility might be to implement automatic spell-checking (i.e. correcting mistyped words after the fact). These are things that will be narrowed down once more thorough play-testing is done.

Does it spoil the spirit of the game?

When I first implemented this, I was worried about autosuggest giving away secrets in the game. I soon realized, however, that this isn’t a big problem.

Will it give away spoilers to some of the puzzles? With properly designed puzzles, I have yet to see an instance where this is the case. If the autosuggest is spoiling things, that tends to mean you’ve got a “guess the word” puzzle – which is exactly what I’m trying to avoid. The possible interactions and objects around you should hopefully be obvious.

There are some cases when an object on-screen is hidden, and only revealed (and described) when you interact with another object. In that case, yes, the auto-suggest could give some clues. However, with a vocabulary of over 1400 words, it’s going to be difficult to gain much insight from the autosuggest results.

Possible improvements

Currently, the autosuggest is sourced from the compiled resources. These make no distinction between synonyms of words. For instance, all these words are treated equally: acquire, capture, catch, gather, get, grab, obtain, pick, take, want. A number of these are rarely used and can clutter autosuggest results. In this particular case, ‘get’ and ‘take’ belong to our list of preferred words, so they would get boosted.

 

Ask autosuggest

I’m trying to ask you a question, not “acquire” something. But “acquire” is indistinguishable from “get”, which has a boosted ranking.

 

A better example is perhaps the following synonyms: cliff, hill, knoll, mountain, volcano. In a room with a volcano, the script source code will reference the term ‘volcano’. This is the term I want promoted most heavily. However, all these terms are synonymous from the interpreter’s point of view. I simply need the resource compiler to output more information regarding which words are used in source code. Playtesting will determine how much of a problem this clutter ends up actually being.

 

Ask autosuggest

Ah, that’s better. Still not great though.

2d polygon pathfinding

Welcome to the first post in the dev blog for Cascade Quest! You can expect semi-regular posts about behind-the-scenes implementation details, new artwork, or any other interesting updates. Let’s get started on the first topic: pathfinding!

Character movement

In the original Sierra games (AGI, and early pre-VGA SCI), movement was accomplished by moving the player along straight lines: either with arrow keys, or by click-to-move with the mouse.

The character’s walkable boundaries were defined by a bitmap that indicated which pixels were walkable and which were not.

Later Sierra games (VGA SCI and up) and LucasArts’s SCUMM engine used more intelligent pathfinding to automatically route the player around obstacles. As these games were primarily played with the mouse, this more complicated implementation made sense.

Up to now, the Cascade Quest code had used the old system – only direct lines were supported. It was up to the user to move the player’s character around obstacles manually. This sort of made sense, as the keyboard is the primary input mechanism in this parser-based game.

There are problems with this though. During cutscenes, it is necessary to move the player’s character (or NPCs) to specific locations. To do so, I either need to disable collision checking for the actors, or be absolutely certain they are in a good position to reach the destination without bumping into something. Both of these are error-prone. They can result in bugs where the actor never reaches the destination and the cutscene is blocked – or, in the best case, I detect this and continue the cutscene in a visually broken manner, with the actor in a location that doesn’t make sense.

Proper pathfinding

So I decided I needed to implement proper pathfinding. While back during the early Sierra days this may have been quite a magical feat (in fact, Sierra has a patent on the algorithm they use), today it’s a commonplace “solved” problem. Make a graph that connects adjacent walkable areas and run an A* algorithm over it.

A* is the easy part. Defining the walkable areas and the connections between them is the more challenging bit. Navmeshes are commonly used in 3d (or pseudo-3d) games. The latest version of Unity that I’m using supports runtime navmesh generation (and pathfinding on that navmesh), but it isn’t really suitable for me. It still requires 3d geometry (meshes or physics colliders). My source material is simply a 2d image with polygon boundaries:

 

Walkable area bounds


 

A further complication is that Unity’s navmesh generation requires knowing the agent size (the radius of the character moving through the environment). So the agent’s “footprint” must be a circle. In Cascade Quest though, the characters’ footprints are wide and narrow (generally only 2 pixels high).

So basically I would rather just take my simple polygonal shapes (a collection of “barred” and “allowable” non-convex polygons) and generate a connectivity graph. One possibility is to generate a traditional navmesh by decomposing the polygon into triangles.

Another possibility is to use the concave vertices of a polygon. David Gouveia has a good article on this. Basically, by determining the interior line-of-sight connections between the concave vertices, you have all you need for pathfinding. Your start and end points (as long as they are within bounds) are guaranteed to connect to one of the concave vertices.

Here’s what this would look like for the screenshot above:

 

Concave vertex connections


From there it’s a simple matter of determining the connection between the start/end points and the concave vertices, and then running A* over the graph nodes.
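
A sketch of building that graph (assuming counter-clockwise polygon winding and some line-of-sight test; see the linked article for the full treatment):

    def is_concave(polygon, i):
        # With counter-clockwise winding, a vertex is concave when the
        # turn at that vertex bends inward (negative cross product).
        a, b, c = polygon[i - 1], polygon[i], polygon[(i + 1) % len(polygon)]
        cross = (b[0] - a[0]) * (c[1] - b[1]) - (b[1] - a[1]) * (c[0] - b[0])
        return cross < 0

    def build_graph(polygons, line_of_sight):
        nodes = [v for poly in polygons
                 for i, v in enumerate(poly) if is_concave(poly, i)]
        edges = {n: [] for n in nodes}
        for i, a in enumerate(nodes):
            for b in nodes[i + 1:]:
                if line_of_sight(a, b):     # interior and unobstructed
                    edges[a].append(b)
                    edges[b].append(a)
        return nodes, edges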

Implementation

My background image editor (and engine) support three types of polygons:

  1. Polygons that dictate where actors can be
  2. Polygons that dictate where actors can’t be (these may appear inside the above polygons – for example the tree and sign-in box in the screenshots).
  3. Polygons where an actor can be only if the start or end point is within them (used to navigate around dangerous areas, or areas where something might be accidentally triggered).

For the purposes of the algorithm, type 3 can be considered like type 2 if the actor is outside of them (otherwise they are ignored).

So we basically have “yes, you can be here” and “no, you can’t be here” polygons. What’s more, these can intersect. I could force a restriction that they don’t intersect, but sometimes I dynamically generate polygons in game (for obstacles that come and go, such as doors), so it’s easier if I allow for intersection. However, the algorithms discussed above don’t work if the polygons intersect. Luckily, there is an easy-to-use polygon clipping library available.

With Clipper, I can ensure I have a bunch of non-intersecting polygons, from which I can then apply the above algorithms.
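
For illustration, here’s roughly what that cleanup step looks like using pyclipper, a Python binding for the same Clipper library (the game uses Clipper from its own engine, so this is an analogous sketch rather than the actual code):

    import pyclipper

    pc = pyclipper.Pyclipper()
    # "Yes, you can be here" polygons as subjects, "no, you can't" polygons
    # as clips; the difference yields clean, non-intersecting walkable regions.
    pc.AddPaths(walkable_polygons, pyclipper.PT_SUBJECT, True)
    pc.AddPaths(blocked_polygons, pyclipper.PT_CLIP, True)
    walkable = pc.Execute(pyclipper.CT_DIFFERENCE,
                          pyclipper.PFT_EVENODD, pyclipper.PFT_EVENODD)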

Here’s a gif of it in action:

 

Pathfinding

Pathfinding from one spot to another. The green star is the target, the red stars are the intermediate points.

Robustness

These algorithms generally function on floating point vertices. My game world deals with fairly low resolution integer values, however.

The “Pathfinding on a 2d Polygonal Map” article linked above mentions adding a tolerance to some of the calculations, but this really isn’t sufficient if your values are going to be rounded to integers and fed back into the algorithm.

In addition to paths determined by the method in this article, I also need to support traditional movement – that is, just moving in a particular direction. In this case, instead of finding a path from A to B, I need to ask the question “Am I in a valid area?”. I won’t get into the nitty-gritty details, but often a pathfinding result ends up making an actor pass through pixels that fail the “Am I in a valid area” test. Likewise, sometimes pixels that succeed the “Am I in a valid area” test result in the pathfinding algorithm failing because it can’t find a line-of-sight connection to any of the polygon’s concave vertices. To address these issues, I make the following concessions:

  • In the game, I don’t run the “Am I in a valid area?” test if the actor is following a path generated by the pathfinding routines.
  • When doing pathfinding, I always allow starting in an invalid area – it will generate a valid path that navigates the actor back into a valid area. This prevents the (terrible) scenario where the player would get stuck out of bounds.
  • To accomplish the previous bullet point, I need to start from a point in the polygon that is valid. I perform the “Am I in a valid area?” test with a zero tolerance. If it fails, I find the closest polygon edge, and push the point a tiny amount to the other side of that edge. It will then succeed in finding a line-of-sight connection to a concave vertex (necessary for successful pathfinding). A sketch of this nudge follows the list.
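
A sketch of that nudge (with hypothetical contains, closest_edge_point, and inward_normal helpers):

    def ensure_inside(point, polygon, epsilon=0.01):
        # Zero-tolerance containment test; if it fails, push the point
        # just past the closest edge so that line-of-sight connections
        # to the concave vertices can be found.
        if contains(polygon, point):
            return point
        edge, closest = closest_edge_point(polygon, point)
        nx, ny = inward_normal(edge)     # unit normal pointing into the polygon
        return (closest[0] + nx * epsilon, closest[1] + ny * epsilon)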

Other notes

With proper pathfinding, I can also allow the player to automatically and reliably navigate to various objects in the scene. For instance, if there’s an obvious and specific object on the ground (say, a wrench), I can automatically navigate the player over to the wrench to pick it up if they type “pick up wrench”. This is arguably better than telling the player they aren’t close enough, and require them to manually navigate there and retype their command. I’ll go over this in more detail in a subsequent post.

 

Getting a branch

Auto-navigating the player somewhere to perform an action.