The Ebonheim Chronicle


Or: The Least Critical-Path Feature I Could Have Possibly Spent a Day On

The General Plan

Localization has been on my mind a bit recently. It isn't part of the next milestone but I have always maintained a loose plan of action and so I sometimes revisit the idea to make sure I'm building the engine in such a way that translations will be simple to implement.

Most of the work to support other languages just has to do with text-replacement. There's text in-code, in-asset, and even, although rarely for this game, in-image that would all need run-time substitutes for other languages.

The vision for supporting this revolves around an asset-driven approach, much like everything else in the game. Essentially, user-facing text will be a special object that understands the context it exists in but is still just a text box in the game's various asset UIs.

From there, we can collect all of them from all assets into a big spreadsheet with a different language in each column to define their replacements. This has a ton of benefits:

  • Single source to work in and review remaining changes
  • Live-updating game instance to see the changes in-context
  • Sort, search, and edit across all assets and know their context at a glance
  • Set of translations is a separate asset that can be applied or overridden the same as other assets
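To make the lookup side concrete, here's a minimal sketch of resolving a text object against a loaded language table. The names and structure are hypothetical, not the engine's actual code:

```cpp
#include <map>
#include <string>

// Hypothetical sketch: a spreadsheet column for one language becomes a
// table keyed by each text object's context id, falling back to the
// source-language text when no translation exists yet.
struct TranslationTable {
   std::map<std::string, std::string> byContext;

   std::string resolve(const std::string& context, const std::string& source) const {
      auto it = byContext.find(context);
      return it != byContext.end() ? it->second : source;
   }
};
```

The fallback is what lets a partially-translated set of assets still render every string.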

For hard-coded text in-engine, a macro is used to automatically register those strings with their own contexts to a core-generated part of the translation asset.
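A rough sketch of how such a macro could work, with entirely made-up names (the real registration hooks into the asset system rather than a bare map):

```cpp
#include <map>
#include <string>

// Hypothetical registry that collects hard-coded engine strings under a
// context key so a tool can emit them into the translation asset.
struct EngineStringRegistry {
   std::map<std::string, std::string> entries; // context -> source text
   static EngineStringRegistry& instance() {
      static EngineStringRegistry r;
      return r;
   }
};

// Records the string under its context and returns it unchanged; a real
// version would return the current language's replacement instead.
inline const char* registerEngineString(const char* context, const char* text) {
   EngineStringRegistry::instance().entries[context] = text;
   return text;
}

// every in-code string gets a stable context id at its call site
#define TR(ctx, str) registerEngineString(ctx, str)
```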

This won't be difficult to implement and should come together pretty quickly. Whoever gets contracted to do the localization should have a painless experience working in the game directly to get everything done!

As long as they only ever need 8x8 ASCII characters which should absolutely never be a problem!

The Problem

I don't know Japanese. But I have been around video games long enough to know that the further back in time you go, the harder rendering Japanese text becomes.

My pseudo-fantasy-EGA engine is fully software-rasterized with every pixel of my 712x292 frame buffer being drawn on the CPU. Text is drawn by taking a font bitmap, grabbing a cached recolor of it for when you want different colors, and rendering it similar to a sprite atlas with the single-byte character code being an index into the grid. On the original EGA, these characters were baked into the hardware of the card and in cases of text modes you could only draw those characters on a pre-set text grid. The limits of my renderer are much more lax than that and I have some pretty generic freedom for where and how to draw text.
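For the ASCII case, the atlas math is about as simple as it gets. This is just an illustrative sketch of the indexing scheme described above (assuming a 32-column grid of 8x8 glyphs), not the engine's actual renderer code:

```cpp
// Cell lookup for a font atlas laid out as a 32x8 grid of 8x8 glyphs,
// where a character's byte value is its row-major index into the grid.
struct AtlasCell { int px, py; }; // pixel offset into the font bitmap

constexpr int kAtlasCols = 32;
constexpr int kCharW = 8, kCharH = 8;

AtlasCell asciiAtlasCell(unsigned char c) {
   return { (c % kAtlasCols) * kCharW, (c / kAtlasCols) * kCharH };
}
```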

So when I started to think about supporting Japanese (or really any non-roman script) I thought ok, maybe I can just have a new font bitmap that has the new characters in the same 8x8 character resolution. If you just pull from the appropriate index in the (much) larger image it'll work the same as ASCII for all of the UI and content-responsive layouts and translators can just write Unicode into the translation tool.

The Misaki Font

After some googling (an increasingly rare way to start a sentence) I came upon this great Language Exchange comment talking about an 8x8 Kanji font. The responder pretty clearly says to not do this and also nicely adds some helpful screenshots and even talks about how fucking Dragon Quest on the 3DS had people unable to tell the difference between three different types of swords from the tiny text.

Never one to trust an expert opinion, I did not read any of that and hastily clicked the link of what not to use, bringing me, just as I assume many before me, to the Misaki font.

Chrome auto-translated the page which was helpful because I can't read any of it. Sorry, why was I doing this again? Oh right, localization. Skimming the page I saw that the font was created in the 90's for use with the Sharp PC-E500 Pocket Computer and also that the font was actually just 7x7 so it could have a single-pixel gap to make the characters not bleed together. I got the impression talking to others and seeing comments that most of these Kanji are just about impossible to discern on their own, requiring the context of the full sentence to infer.

Nevertheless, they had a PNG atlas to download the whole thing in one image, which is what I did for ASCII so I decided to start there!

A Quick Note on TTF

The website also makes a .TTF available, and using the incredible single-include library stb_truetype I could theoretically render the characters to a virtual bitmap and transfer those to the EGA framebuffer at runtime for text rendering. Applied to my other regular font-rendering this would open the door to variable font sizes and a lot of flexibility.

I played with this idea and even got a basic version working, but I was a little frustrated dialing in the precision for specific pixel sizes and getting the characters to render exactly the way I need them. I also think that the current text limitations are one of those creativity-producing limitations rather than one of those annoying limitations so I gave up on the idea.

So you have 8,836 Characters...

For ASCII there was just a nice 32x8 character grid, and the unsigned char value of a character directly corresponded to an index in that 2D array. The Misaki font PNG was 94x94 characters with a ton of blank areas, and clearly there was rhyme and reason to its organization that I had zero knowledge of.

I looked for documentation, for text files saying which characters correspond to which grid positions, and even wondered whether the layout was related to Unicode at all. Nothing. And it's not like I was going to figure out which characters were which and build the mapping myself.

Feeling stumped at lunch, I vented my despair to @SP, Developer of Super Puzzled Cat, Launching January 2025 with a FREE DEMO you can play TODAY just in time for Steam Next Fest, and he half-remembered something about something called “Shift-JIS.”

This was the search term I needed, and I quickly found myself at the Wikipedia article for JIS X 0208, a two-byte Japanese Industrial Standard first written in 1978. The page is extremely helpful because it breaks down every “row” of characters in the 94x94 grid, and I was able to confirm that it lined up perfectly with the Misaki font PNG. The way the encoding works is that to reference a character, you need its kuten (区点), the two numbers, row-column, that act as cell coordinates in the 94x94 grid! Easy!
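The byte arithmetic is a small offset dance. As I understand it, each of the two JIS X 0208 bytes is the 1-based kuten row/column plus an offset of 0x20, so the first cell lands at 0x2121. A quick sketch:

```cpp
#include <cstdint>

struct Kuten { int row, col; }; // 1-based, each in 1..94
struct GridCell { int x, y; };  // 0-based cell in the 94x94 atlas

// each byte of the two-byte code is the kuten value plus 0x20
Kuten kutenFromJIS0208(uint16_t jis) {
   return { ((jis >> 8) & 0xFF) - 0x20, (jis & 0xFF) - 0x20 };
}

GridCell cellFromKuten(Kuten k) {
   return { k.col - 1, k.row - 1 };
}
```

So 0x2121 (the ideographic space) is kuten 1-1, the top-left cell of the PNG.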

Mapping to Unicode

In the modern era, our main way to input non-ASCII characters is with Unicode, which assigns every character a numeric “codepoint” along with encoding schemes for representing those codepoints in memory as 1-4 bytes. I'm going to try hard not to get too into Unicode in this post because it will reveal how little I know about it.

Of course, following JIS X by over a decade, Unicode has exactly 0.000 correlation or overlap with it. I'll repeat: for ASCII we can just take the single-byte character from any string in C, index it into our little font map, and be good. For Japanese we were absolutely stumped, because without a 1:1 map there is no way for Unicode codepoints from an asset file to resolve to the correct kuten.

Luckily, Unicode has a website located somewhere deep beneath New York's Trinity Church hidden behind an elaborate series of thematic clues and puzzles. It contains a page with an ftp link to the older original JIS 0201 Mapping. I couldn't get the link to work but maybe I was FTPing wrong. (Update: Mastodon User @gamedevjeff was able to find the moved FTP link at ftp://ftp.unicode.org/Public/MAPPINGS/OBSOLETE/EASTASIA/JIS/JIS0208.TXT)

Regardless, I really wanted 0208. On a lark I googled for the theoretical filename “JIS0208.TXT” and bingo! There, in Google's GitHub repository for their Japanese IME, was the exact file I was looking for!

Written in 1990, the Unicode file contains 7000 lines of this:

0x8140	0x2121	0x3000	# IDEOGRAPHIC SPACE
0x8141	0x2122	0x3001	# IDEOGRAPHIC COMMA
0x8142	0x2123	0x3002	# IDEOGRAPHIC FULL STOP
0x8143	0x2124	0xFF0C	# FULLWIDTH COMMA
0x8144	0x2125	0xFF0E	# FULLWIDTH FULL STOP
0x8145	0x2126	0x30FB	# KATAKANA MIDDLE DOT
0x8146	0x2127	0xFF1A	# FULLWIDTH COLON
0x8147	0x2128	0xFF1B	# FULLWIDTH SEMICOLON
0x8148	0x2129	0xFF1F	# FULLWIDTH QUESTION MARK
0x8149	0x212A	0xFF01	# FULLWIDTH EXCLAMATION MARK
0x814A	0x212B	0x309B	# KATAKANA-HIRAGANA VOICED SOUND MARK
0x814B	0x212C	0x309C	# KATAKANA-HIRAGANA SEMI-VOICED SOUND MARK
0x814C	0x212D	0x00B4	# ACUTE ACCENT
0x814D	0x212E	0xFF40	# FULLWIDTH GRAVE ACCENT

The first column is the Shift-JIS code (a modern extension of JIS), the second column is the JIS X 0208 kuten, and the third is ✨The Unicode Codepoint✨

Writing a Parser

I should do a whole blog post about how I parse text these days, but about 5 years ago I stopped dreading file parsing forever when I started using “Accept-style” recursive-descent parsing. Whipping up a few lines to load this entire file into a hash-map at runtime only took a few minutes!
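My in-house StringParser isn't public, but the accept-style core fits in a few lines. This is a simplified sketch (a hypothetical MiniParser, not the engine's class) just to show the pattern: every accept function either consumes input and returns true, or leaves the cursor where it was and returns false:

```cpp
#include <cstddef>
#include <cstring>

struct MiniParser {
   const char* pos;
   const char* end;

   bool atEnd() const { return pos >= end; }

   // consume a literal string if it comes next
   bool accept(const char* lit) {
      size_t n = strlen(lit);
      if ((size_t)(end - pos) >= n && memcmp(pos, lit, n) == 0) {
         pos += n;
         return true;
      }
      return false;
   }

   // consume one character if it's in the given set, reporting which
   bool acceptAnyOf(const char* set, char* out) {
      if (!atEnd() && *pos != 0 && strchr(set, *pos)) {
         if (out) *out = *pos;
         ++pos;
         return true;
      }
      return false;
   }
};
```

Because failures never move the cursor, you can chain accepts with && and fall through cleanly when a line doesn't match.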

Here's the complete code, I've annotated it with a bunch of extra comments to explain what everything does.

// read 4 hex characters and shift the nibbles into a single 16-bit number
// returns true on success, populating out
static bool _acceptHexShort(StringParser& p, uint32_t* out) {
   auto snap = p.pos; // error recovery snapshot

   uint32_t workingNum = 0;

   // 4 nibbles
   for (int i = 0; i < 4; ++i) {
      char d = 0;
      if (!p.acceptAnyOf("0123456789ABCDEF", &d)) {
         p.pos = snap;
         return false;
      }

      // convert the char to a number
      if (d >= 'A') d = d - 'A' + 10;
      else d -= '0';

      // shift it into place
      workingNum |= d << ((3 - i) * 4);
   }
   *out = workingNum;
   return true;
}

// contains the 3 numbers in a line of the map file
struct JISMapLine {
   uint32_t shiftjis = 0, jis0208 = 0, unicode = 0;
};


static void _constructUnicodeToJISMap(sp::hash_map<uint32_t, Int2>& mapOut) {
   auto file = bundledFileString("JIS0208.TXT"); // in-house file-bundler, the txt is encoded inside the exe and sitting in memory at this point
   StringParser p = { file.c_str(), file.c_str() + file.size() };

   while (!p.atEnd()) {
      if (p.accept("0x")) {
         // start of new line

         JISMapLine line;
         if (_acceptHexShort(p, &line.shiftjis) &&
            p.accept("\t0x") && _acceptHexShort(p, &line.jis0208) &&
            p.accept("\t0x") && _acceptHexShort(p, &line.unicode)) {

            // we grabbed the 3 numbers, split the jis0208 into its two bytes
            // each byte is the 1-based kuten value plus an offset of 0x20

            auto row = (int)(((line.jis0208 >> 8) & 0xFF) - 0x20);
            auto col = (int)((line.jis0208 & 0xFF) - 0x20);

            // map the unicode codepoint to this 2d grid cell
            mapOut.insert(line.unicode, Int2{ col - 1, row - 1 });
         }
      }

      while (!p.atEnd() && !p.accept('\n')) p.skip(); // skip to end of line
   }
}

Int2 JISCellFromUniChar(uint32_t unicode) {
   static sp::hash_map<uint32_t, Int2> _Map; // in-house hashmap
   if (_Map.empty()) {
      // populate once per program run
      _constructUnicodeToJISMap(_Map);
   }

   // this hashtable is way faster than std::unordered_map so this is fine
   if (auto srch = _Map.find(unicode)) {
      return *srch.value;
   }
   return { -1,-1 };
}

So We're Done! Almost...

With our shiny new JISCellFromUniChar function we can pass in any codepoint and get a supported kuten for referencing a cell in our Misaki PNG.

But there is the tiny issue of getting those codepoints. Again, I'm not going to get into Unicode too much here, but the main thing is that a utf8 string is still just a null-terminated const char* in your code; you just can no longer read it one byte at a time. Instead, every time you go to read a character, you check specific bits of the leading byte to see how many bytes the character continues into. There are great small libraries for traversing a utf8 string but I had never written one before so here's mine...

const char* utf8ToCodepoint(const char* input, uint32_t* codepoint) {
   auto s = (const unsigned char*)input;
   if (s[0] < 0x80) {
      *codepoint = s[0];      
      return input + 1;
   }
   else if ((s[0] & 0xE0) == 0xC0) {
      *codepoint = ((s[0] & 0x1F) << 6) | (s[1] & 0x3F);
      return input + 2;
   }
   else if ((s[0] & 0xF0) == 0xE0) {
      *codepoint = ((s[0] & 0x0F) << 12) | ((s[1] & 0x3F) << 6) | (s[2] & 0x3F);
      return input + 3;
   }
   else if ((s[0] & 0xF8) == 0xF0) {
      *codepoint = ((s[0] & 0x07) << 18) | ((s[1] & 0x3F) << 12) | ((s[2] & 0x3F) << 6) | (s[3] & 0x3F);
      return input + 4;
   }
   *codepoint = 0xFFFD; // invalid
   return input + 1;
}

Finally, it's time to actually render the characters. We traverse our utf8 string, pull out the codepoints, look up the kuten, and build a UV rect for the font texture:

void egaRenderTextSingleCharUnicode(EGATexture& target, EGATexture& font, Int2 pos, uint32_t codepoint, EGARegion* clipRect) {
   codepoint = convertAsciiCodepointToFullWidth(codepoint);
   auto cell = JISCellFromUniChar(codepoint);
   if (cell.x >= 0 && cell.y >= 0) {
      Recti uv = { cell.x * EGA_TEXT_CHAR_WIDTH, cell.y * EGA_TEXT_CHAR_HEIGHT, EGA_TEXT_CHAR_WIDTH, EGA_TEXT_CHAR_HEIGHT };
      egaRenderTexturePartial(target, pos, font, uv, clipRect);
   }
   else {
      // err
      egaRenderLineRect(target, Recti::fromVecs(pos, Int2{ EGA_TEXT_CHAR_WIDTH, EGA_TEXT_CHAR_HEIGHT }).expand(-2, 0), EGAUIColor_ltred, clipRect);
   }
}
void egaRenderTextUnicode(EGATexture& target, EGATexture& font, Int2 pos, const char* text_begin, const char* text_end, EGARegion* clipRect) {
   if (!text_end) text_end = text_begin + strlen(text_begin);

   auto cur = text_begin;
   while (cur != text_end) {
      uint32_t cp;
      cur = utf8ToCodepoint(cur, &cp);
      egaRenderTextSingleCharUnicode(target, font, pos, cp, clipRect);
      pos.x += EGA_TEXT_CHAR_WIDTH;
   }
}

And then we let 'er rip and prayed! At this point I had absolutely no way of knowing if the values inside the map were correct or garbage or what.

      auto uniFont = egaFontFactoryGetFont(gi.state.fontFactory, EGAUIColor_black, EGAUIColor_white, EGAFontEncodingType_Unicode);
      egaRenderTextUnicode(gi.ega, *uniFont, Int2{ 8,8 }, u8"あなたの国は影に屈することになるだろう", nullptr, nullptr);

And it worked!

The folks at Nice Gear Games were nice enough to translate my overdramatic save-deletion dialog so that I could test it out:

One great thing with this is that my rich-text rendering still works! So text color and inline icons are already perfect:

One Last Hang-up

The font also has roman characters, so I went ahead and tried to write a regular English message but none of the characters rendered. Well, sure enough, the codes represented by JIS are the Full-Width roman characters which have different codepoints than what ASCII maps to.

So yet another function for catching those full-width conversions:

uint32_t convertAsciiCodepointToFullWidth(uint32_t c) {
   if (c == ' ') {
      c = 0x3000;
   }
   else if (c >= '!' && c <= '~') {
      c += 0xFEE0;
   }
   return c;
}

Maybe That Person Was Right About 8x8

After the excitement of getting this all working wore off, I did start to notice/hear that the text is very hard to read. The Misaki font page does have an 8x12 font that is more readable, so I went ahead and tried tossing that into the game. Now, this one's a bit more involved because the 8x8 font size is hard-coded, and changing it for this messes up a lot of things. It would be a bit more work to actually update the UI to support variable-height text. But I'm happy to show that the larger font size works just fine with the content-responsive UI in the game and looks really slick:

Thank You For Reading!

I really wrote a lot here but this was such a fun little project to get sniped by! As always, if you'd like to discuss the content here, you can reach out to me on Mastodon or reply to the post about this post, which I'll link here. Have a great day!!

Now all I need to do is finish making the game so that somebody can translate it someday!

#gamedev #longpost #chron4

| 🌐 | 🙋‍ | @britown@blog.brianna.town

When I decided to make a turn-based JRPG for Android in 2010, my initial thought was that it would be simple. After all, being turn-based, it wouldn't have complex physics or real-time issues and the simple art-style would make it a breeze.

Obviously, having never attempted to develop a complete game to release before, I had no idea what I was talking about, and indeed that game never went further than a moderately-successful demo.

Making games being fundamentally impossible aside, the key misunderstanding I want to highlight here today is the disastrous assumption that turn-based games have simpler logic.


The Naïve Approach

AKA: How BladeQuest did it

So it's your party's turn to act in the game execution loop. You're looping, waiting for the player to make an input on their decision for the action they wish to perform. This involves UI: menus, clicks, confirms, cancels, etc. but the game state isn't fundamentally changing. Being turn-based (ignoring Active-Time-Battle shenanigans), the enemies aren't attacking, you're not taking damage, the player has as much time as they need to make their decision and confirm it.

In the code, at the basic level, once the confirmation of the decision is made, the game state is affected. Damage calculations are run, defensive stats are considered, numbers are created and then applied to health bars through judicious addition and subtraction.

Of course, just updating these numbers and marking dead baddies as dead isn't very exciting so you need to do some animations. When an animation is playing, your game loop needs to understand that something is currently blocking further execution and do nothing, waiting for the animation to end.

So maybe you do something like this, for each character in the turn-order:

  1. First we want them to slide out of the party lineup to an acting position, so while this slide is happening, update their drawn position every frame based on a time step
  2. If they're at the acting position, change their sprite to an “acting pose” and yield every frame until an amount of delay time in that pose has passed
  3. If the posing delay is done, start the animation sequence for the selected action, creating particles, showing shapes, manipulating sprites
  4. During the animation, pegged to specific points or maybe just after it's done, apply some damage to a target
  5. Calculate the damage on the target enemy and add a “damage marker” to the draw state which will show that number bounding in front of the target
  6. Once the bouncing is done, actually apply the number to the enemy behind the scenes and see if they died
  7. If they died, start a death animation and wait each frame until that's complete
  8. If all the animations that have been started are complete on the current frame, slide the character back into the party line, waiting and updating position by timestep
  9. Now increase the turn index so the next character in the turn order goes
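In code, those steps tend to congeal into one big per-frame state machine. This is a deliberately simplified sketch of the shape (hypothetical names, fixed phase lengths), not BladeQuest's actual code:

```cpp
// One enum-driven phase tracker, advanced a little each frame; most
// frames just "yield" because some animation is still blocking.
enum class TurnPhase {
   SlideOut, Posing, ActionAnim, DamageBounce, DeathAnim, SlideBack, NextActor
};

struct NaiveTurnState {
   TurnPhase phase = TurnPhase::SlideOut;
   float timer = 0.0f; // per-phase clock
   int turnIndex = 0;  // whose turn it is
};

void naiveUpdate(NaiveTurnState& s, float dt, float phaseLen) {
   s.timer += dt;
   if (s.timer < phaseLen) return; // yield: current phase still playing
   s.timer = 0.0f;
   switch (s.phase) {
   case TurnPhase::SlideOut:     s.phase = TurnPhase::Posing;       break;
   case TurnPhase::Posing:       s.phase = TurnPhase::ActionAnim;   break;
   case TurnPhase::ActionAnim:   s.phase = TurnPhase::DamageBounce; break;
   case TurnPhase::DamageBounce: s.phase = TurnPhase::DeathAnim;    break;
   case TurnPhase::DeathAnim:    s.phase = TurnPhase::SlideBack;    break;
   case TurnPhase::SlideBack:    s.phase = TurnPhase::NextActor;    break;
   case TurnPhase::NextActor:
      ++s.turnIndex; // next character in the turn order goes
      s.phase = TurnPhase::SlideOut;
      break;
   }
}
```

Every phase carries its own timer and its own exceptions, and that's before any feature interactions show up.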

Why Doesn't This Work

It does work! It even worked in BladeQuest to make a successful demo!! But good god did we have trouble.

The biggest problem with this is state management between frames. An immense amount of state is needed for every part of this to know where particles are, where to draw the actors, what part of the turn they're on, etc. so that each frame your game knows whether to do something, draw something, or yield.

One of the hardest forms of this state tracking is timers. There could be timers for screen shaking, screen flashing, moving to poses, animation delays, marker bounces, all checking against their own internal clocks for when they're done. While nice and modular in theory, individual systems having their own internal timers requires them to sync with each other and communicate their status because they are often temporally blocking game state changes from taking place.

If actually taking the damage shouldn't happen until after an animation is finished, the line between render and update blurs, violating the golden rule of never allowing your game render to modify your game state.

If an attack critical-hits, you can pop off a quick screen-flash with a line of code, but what if that attack gets cancelled or blocked? Do you remove the flash you added? What if you want to play a special animation before the crit gets applied? You'll have to calculate if the crit will happen first, play animations with special state exceptions to wait for them to finish, and then calculate your damage and apply it. In the end, execution order winds up mattering a ton here.

In terms of scaling, special exceptions for new features start costing exponential dev effort to glom onto this system. Want to add a counter/parry/interrupt ability? Enjoy digging up every waiting-for-animations-to-finish call to see if it needs to handle a cancellation. We had an item called a Safety Ring which would prevent a fatal hit, and the edge cases around a definitely-dead character not actually being dead were so numerous that we were still fixing Safety Ring bugs a day before the demo launched.

The Atomic Turn

Ok not actually technically atomic, but it sounds cooler.

I've been hinting a little at a possible solution to the largely temporal issues with state management for a turn-based game. The biggest successes I've had with Chronicles development have been in identifying proper separations of concerns:

  • Separate your data from your logic
  • Separate your UI from your data
  • Separate your update from your render
  • Separate your device platform from your semantic inputs

The problem with the system described above is that display/animation/aesthetic/presentation is interleaved with execution logic.

What if we wrote a function that just executes the entire turn in a single function call, one frame, “atomically”? We can loop over the characters in the turn order, skip all presentation, determine the outcomes of all the actions and decisions, and apply them to the game state.

New features and exceptions can be written into this execution function much more easily, because they don't have to contend with timings and waiting. At every point in the execution of this function, the current game state is the exact correct state of all participating characters. character.health is correct at the time you check it because you're still in the same frame and same function call you started executing the turn in!

Let's make this even more useful by applying some of the functional programming concepts I talked about ages ago, and say that our turn execution function should take a const GameState and return a new, post-turn GameState. Now we're not even modifying the rendered state; we're just running the turn like we would call any other function. This means we could actually execute the complete turn as a perfect simulation and inspect the resulting state to derive what happened (or what is about to happen).
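As a toy illustration of that shape (the real GameState is enormously bigger, and these names are made up):

```cpp
#include <vector>

// Take the pre-turn state by const reference, return the post-turn state.
struct ToyActor { int health; int pendingDamage; };
struct ToyGameState { std::vector<ToyActor> actors; };

ToyGameState executeTurnAtomic(const ToyGameState& in) {
   ToyGameState out = in; // copy; the rendered state is untouched
   for (auto& a : out.actors) {
      a.health -= a.pendingDamage; // every outcome resolved this frame
      a.pendingDamage = 0;
   }
   return out;
}
```

Because the input is untouched, the same call doubles as a preview: run it, inspect the result, throw it away.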

But What About All My Animations??

Of course, just updating these numbers and marking dead baddies as dead isn't very exciting so you need to do some animations.

Rather than letting the presentation timings drive our state changes, we're going to use the state changes to set up our presentation timings.

After making a decision for your character in Chronicles, the turn plays out in front of you. Here is the total code in the game update that is happening for every frame of that execution:

void turnExecuteStep(GameState& g) {
   auto& turn = g.turn;
   assert(turn.stage == TurnStage_Executing); // not executing!!

   if (g.step >= turn.startStep + turn.totalTurnLength) {
      turn.stage = TurnStage_Finished;
   }
}

The reason for this is that, while the atomic execution function is executing, in addition to the game state being updated, timed render logic is being added to a set of timelines to block out the turn.

Here is an example of the function that is called whenever any damage is applied to another actor during the turn execution function. Pay close attention to the second half, where all of the functions take some form of a when parameter:

void ActorExec::applyDamage(GameState& g, ActorHandle sender, ActorHandle receiver, WorldGridCoords3D receivePos, ActorDamage const& dmg, ActionApplyMode mode, StepCount when, ExecutionTimeBlocks& blocks, bool blockAnims) {
   auto& cons = *getCurrentGameConstants();
   auto a = receiver;
   if (actorAlive(g, a) && dmg.dmg > 0) {
      auto inflicted = actorApplyDamage(g, a, dmg);
      int totalInflicted = inflicted.health + inflicted.armor + inflicted.stamina;

      auto dmgCpy = dmg;
      dmgCpy.dmg = totalInflicted;
      turnRecordDamagedEvent(g, sender, receiver, dmgCpy);

      bool killed = false;
      if (auto act = g.save.actors.find(a)) {
         if (act->health <= 0) {
            killed = true;
         }
      }

      // first we animate for hurtLen
      // then show dmg number
      // then we show death fade

      StepCount msgEndStep = 0;

      StepCount hurtLen = 0;

      if (mode == ActionApplyMode_Execute) {

         // don't show hurt animation if they took 0
         if (totalInflicted > 0) {
            hurtLen = cons.hurtPaletteLen;
            gamePlaySound(g, CoreAsset_SFX_Damage, when);
            actorAddDamagedAnimation(g, a, when, when + hurtLen);
            if (a == g.save.player_controlled) {
               gameShowDamagePaletteFlash(g, when, when + hurtLen);
            }
            actorSetDrawnStatus(g, a, actorCalcStatus(g, a), when + hurtLen);
         }
      }

The first thing we do is actually apply the damage numbers to the receiving actor and record the event in a log used for interrogating simulations like I described earlier.

mode is a way to determine “Preview” vs “Execution” where the former can nicely skip all the presentation-related side-effects of the function.

The important presentation parts here are gamePlaySound, actorAddDamagedAnimation, gameShowDamagePaletteFlash, and actorSetDrawnStatus.

These functions all take begin and end frame counts because they don't execute immediately! All of them will happen at their requested step counts during that execution phase above, where we're just waiting every frame until we hit turn.totalTurnLength!

So you see, we execute the entire turn logic in a single function call, and it sets up a perfectly-synced, interleaving keyframe-style timeline of what the render function should show every frame during the execution.
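A stripped-down sketch of that schedule-now-play-later idea (illustrative only; the engine's real animation records carry much more data than a label):

```cpp
#include <cstdint>
#include <string>
#include <vector>

using StepCount = uint32_t;

// Execution pushes timed events onto a timeline; the render loop just
// asks what's active at the current step.
struct TimedEvent {
   StepCount begin, end;
   std::string what;
};

struct PresentationTimeline {
   std::vector<TimedEvent> events;

   void add(StepCount begin, StepCount end, std::string what) {
      events.push_back({ begin, end, std::move(what) });
   }

   // what should the renderer be showing on this step?
   std::vector<std::string> activeAt(StepCount step) const {
      std::vector<std::string> out;
      for (auto& e : events)
         if (step >= e.begin && step < e.end) out.push_back(e.what);
      return out;
   }
};
```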

TimeBlocks

A very handy tool for organizing frame timings is this simple TimeBlocks struct:

struct StepBlock {
   StepCount begin, length;
};

struct ExecutionTimeBlocks {
   sp::list<StepBlock> blocks;

   operator bool() const { return !blocks.empty(); }

   StepCount end() const {
      StepCount out = 0;
      for (auto&& b : blocks) {
         out = std::max(out, b.begin + b.length);
      }
      return out;
   }
};

When syncing animations you often need a complicated balance of blocking and non-blocking animations, and you want child calls and dependent timings to not burden the parent. Maybe I want to fire 100 random arrow particles, all starting and ending at random times. I don't care about any individual arrow, but I don't want to continue to the next step until the last arrow is done, so you can use these TimeBlocks!

What's nice is that you can then pass these time blocks around to any number of modular helper functions that just push their little blocking time gaps into the set, and at the end the calling parent can easily determine the final frame count of the final item.
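Here's the 100-arrows example sketched out against that struct, with std::vector standing in for my in-house sp::list and a deterministic stagger standing in for actual randomness:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

using StepCount = uint32_t;

struct StepBlock { StepCount begin, length; };

struct ExecutionTimeBlocks {
   std::vector<StepBlock> blocks;
   StepCount end() const {
      StepCount out = 0;
      for (auto&& b : blocks) out = std::max(out, b.begin + b.length);
      return out;
   }
};

// Fire arrows at staggered starts; nobody cares about any individual
// arrow, but the caller continues only after the last one lands.
StepCount fireArrows(ExecutionTimeBlocks& blocks, StepCount when, int count) {
   for (int i = 0; i < count; ++i) {
      StepCount start = when + (StepCount)(i % 7); // pretend-random stagger
      StepCount length = 10 + (StepCount)(i % 5);  // pretend-random length
      blocks.blocks.push_back({ start, length });
   }
   return blocks.end(); // first step after every arrow is done
}
```

However the child blocks interleave, end() gives the first step after the last arrow lands, which is all the parent cares about.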

Here's the full function that gets called if the actor is performing a move-attack:

static StepCount _executeMoveAttack(GameState& g, ActorHandle user, ActionToken token, ActionTokenSet const& set, ActionTokenMemory const& mem, ActionTokenIterationCache const& cache, ActionApplyMode mode, StepCount when) {
   auto startStep = when;
   ExecutionTimeBlocks timeBlocks;

   for (auto&& result : actionMoveAttackCalculateResults(g, user, token, set, mem, cache)) {
      auto dirVec = dirVecFromCoords(result.origin, result.targetTile);
      auto& tState = g.turn.actors[user];

      switch (result.result) {
      case MoveAttackResult_ActivateDoor: {
         ActorExec::toggleDoor(g, user, result.targetTile, mode, when, timeBlocks);
         _pushFoVEventTorchEquipped(g, when);
      } break;
      case MoveAttackResult_Attack: {
         _executeAttack(g, user, result, mode, startStep, timeBlocks);
      }  break;
      case MoveAttackResult_Move: {
         sdh_each(g.turn.actors[user].turnResults) { if (it->type == ActorTurnResult::MoveAttacked) sdh_mark_erased(); }
         tState.turnResults.push_back({ ActorTurnResult::MoveAttacked, result.origin, dirVec, { result.targetTile }, false });

         gamePlaySound(g, CoreAsset_SFX_Move, startStep);

         if (!result.dodgeLocks.empty()) {
            ActorExec::activateDodgeLocks(g, user, user, mode, startStep, result.dodgeLocks, timeBlocks);
         }

         auto dist = int2ManhattanDist(result.origin.xy(), result.targetTile.xy());
         ActorExec::slide(g, user, user, dirVec, dist, true, mode, std::max(startStep, timeBlocks.end()), g.turn.turnLength, timeBlocks);

         turnRecordMoveEvent(g, user, user, result.targetTile);

      }  break;
      }
   }

   return std::max(startStep, timeBlocks.end());
}

The important takeaway here is this function returns a StepCount meant to signify the “End Step” of this action. The turn execution function is going to use that end step as the start step of the next action in the queue.

So we use a local timeBlocks here and pass it to any number of ActorExec:: functions similar to applyDamage above. These Exec functions often call other Exec functions recursively as sliding and taking damage often causes more sliding and more taking damage. From the MoveAttack's perspective, we don't really care, because whatever happens, it's just filling up our timeBlocks which we can just return the end() of as our last step!

Finally, here is an excerpt from the function executeTurnAction we've been talking about all this time:

      if (!disabled) {
         actorSpendStaminaForAbility(g, m, ab.ab);
         auto drawnStatus = actorCalcStatus(g, m);
         drawnStatus.stamina_recovery = 0;
         actorSetDrawnStatus(g, m, drawnStatus, currentStep);

         _beginAbilityCooldown(g, m, ab.ab);

         size_t idx = 0;
         ActionTokenIterationCache cache;

         while (!actionTokensAtEnd(set, idx)) {
            actionTokenLinkIterationCache(set, cache, idx);
            //turn.activeActors.push_back({ m, g.turn.turnEndStep, (int)idx });

            auto tok = set.tokens[idx];
            gameStateCalculateActionTokenDecisionCache(g, m, tok, set, mem, cache);

            if (mode == ActionApplyMode_Execute) {
               currentStep = gameStateApplyActionTokenForExecution(g, m, tok, set, mem, cache, currentStep);
            }
            else {
               gameStateApplyActionTokenForPreview(g, m, tok, set, mem, cache);
            }
            ++idx;
         }
      }

      auto actorEndStep = currentStep;

      auto actorLen = actorEndStep - actorStartStep;
      actorLen = std::max(actorLen, cons.turnMinimumLength);
      turn.executingActorTimes.push_back({ m, actorStartStep, actorStartStep + actorLen });

      auto nextActorStart = std::max(actorStartStep, actorStartStep + actorLen + turn.nextTurnDelay);
      currentStep = nextActorStart;

      actorBlocks.blocks.push_back({ actorStartStep, actorLen });
   }

   auto turnEnd = actorBlocks.end();

For a given actor, we execute their turn actions, starting with our currentStep which, by the end of the action list, contains the final step in that actor's execution. Then we can do some simple logic to apply minimum lengths and determine the start step for the next Actor.

We have another TimeBlocks to keep track of one block per actor and after we're done we just query end() to get our final turn step!
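The TimeBlocks bookkeeping described here is simple enough to sketch. This is a hypothetical minimal version (the real struct surely tracks more), just to show the push-blocks-then-query-end() pattern:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

using StepCount = std::uint64_t;

// Hypothetical sketch: one {start, length} block per actor/event.
struct TimeBlocks {
    struct Block { StepCount start; StepCount len; };
    std::vector<Block> blocks;

    // Final step across all recorded blocks (0 if empty).
    StepCount end() const {
        StepCount out = 0;
        for (auto const& b : blocks)
            out = std::max(out, b.start + b.len);
        return out;
    }
};
```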

A Note on the Render Function

I require and recommend that render always take a const GameState and draw the entire game in an immediate-mode fashion.

For the given step, we can determine what damage indicators to draw, what palette to use, where to draw the characters, what animation primitives to draw, and even do simple lerping and easing from the various begins and ends. The timeline you are building during execution must be completely deterministic such that all you need is your GameState and a current StepCount to draw everything about that frame perfectly!
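As a concrete illustration of drawing purely from begin/end steps, here's a hedged sketch of the kind of pure interpolation helper such a renderer might use; Vec2 and the function name are stand-ins, not the game's actual API:

```cpp
#include <cstdint>

struct Vec2 { float x, y; };

// Pure function of (timeline keyframes, current step): given only the
// begin/end steps stored in the timeline, compute a drawn position.
// No mutable animation state needed anywhere.
Vec2 lerpPosition(Vec2 a, Vec2 b,
                  std::uint64_t beginStep, std::uint64_t endStep,
                  std::uint64_t current) {
    if (current <= beginStep) return a;   // not started yet
    if (current >= endStep)   return b;   // already finished
    float t = float(current - beginStep) / float(endStep - beginStep);
    return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t };
}
```

Because the helper is pure, rendering any frame of the turn is just a matter of asking it with a different StepCount.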

In Conclusion

All of this is to say that, in Chronicles, the entire turn is executed atomically in a single frame which then sets up a complex timeline of render data to display the animated turn execution.

A great deal of thought and care went into this implementation and it required a lot of discipline to see it succeed. It is certainly a pain in the ass whenever a new system needs to create some new frame-delayed timeline system instead of just being able to happen instantly, but the result is an incredibly stable and scalable framework that I can add extremely complex combat logic to ad infinitum.

I hope you enjoyed this write-up! If you have any questions or comments, you can DM me on Mastodon or just reply to the post about this!

Have a great day!

#chron4 #gamedev #longpost

| 🌐 | 🙋‍ | @britown@blog.brianna.town

It's been a really wonderful week of getting my combat system into the hands of interested parties! I want to take some time to pontificate on the last 14 months as well as talk about the future.


Slow and Steady

I had largely given up on personal game development over the last few years. It really just demanded so much of my time and never ended in anything but disappointment. I also now have far more commitments throughout the week than I ever have before so it really started to feel like there was just no room for it anymore.

So when I was bit with the bug of this game, it was time to employ some new strategies.

The biggest difference between this project and every other attempt at making games in my life has been a fairly tight budget for time to work on it. There's a lot of life that's been happening, so I can't just go and dump 80-hour weeks into it. I've had to learn how to cope with the knowledge that large systems are going to potentially take weeks and that there are going to be fairly big gaps where I don't really get to touch it.

Once I began to make my peace with it, I found a much healthier relationship with my side project. I always coded and designed with the expectation that something might not come together for a while, and prioritized systems that I immediately need rather than sinking large amounts of time into theoretical architecture. I also haven't, so far, burned completely out on the immense feature list by burning the candle at both ends week after week.

The fact that I could take a few months off and play Dwarf Fortress or get distracted with Tears of the Kingdom and still just come back and keep going with it is a testament to a better relationship with the work.

The Combat System Milestone

Early into the project, I identified that the scope was just enormous, with so many ideas going on. I knew it was too much for a single person to accomplish in any reasonable timeframe, but I also knew that this was largely a hobby project for my own curiosity; something to tinker with in the evenings.

As I became more excited about the overall design that was brewing, I decided to break it into completable chunks to make progress somewhat realistic to track. Knowing at the time that I was prone to abandoning projects and could very well never see this dream completed, I wanted to design standalone games that would have ~80% overlap with the final vision. If I never completed the game, at least I'd have something to show for it.

At the heart of Chronicles' design is the tile-based combat. The initial elevator pitch was “Nethack, except with Disgaea-like tactics abilities, and Into-the-Breach's perfect turn information.” I also employed the health and stamina system that I had designed for a previous unseen real-time dungeon crawling project which fit perfectly into the gridded turn-based environment.

Regardless of how big the world is or how the character progression works out, the central heart of the game's success to me was making the combat fun. I hate kiting enemies into hallways in roguelikes and wanted something better. So if I couldn't make that elevator pitch work, there wasn't much use in continuing the project.

And so I wrote out the essential pieces necessary for making a standalone “Combat-Puzzles” game where each level you had to use your abilities to defeat the enemy formation. I forbade myself from thinking about shops, and world maps, and faction turns, and equipment, and instead focused on an asset-defined combat actions system for creating complex abilities, UI for turn result messaging, and creative enemy AI.

By far the biggest success of this way of doing a milestone is that I had to make an actual game around the combat: splash screen, menus, tooltips, and audio. I had to do a ton of shell architecture to actually present the combat to players, so being forced to do that this early in the process was extremely beneficial!

Playtesting

I wanted to get feedback on whether or not the combat was fun from people without needing the larger structure to be defined. I was really self-conscious about the demo just coming off as a weekend game-jam game, and I wanted to clearly denote that it's building toward a larger goal. I tried to call it a mechanics playtest more often than calling it a demo because it felt weird to solicit feedback on something that is only tangentially related to the final product.

To help get feedback in this specific context, I solicited volunteers to playtest via a Google Form and then chose to publish to a password-protected itch page, only sending out access to people who volunteered to help test. To me this encouraged participation from people who have been following this blog or otherwise have a personal interest in the project. This helped with my anxiety about people not understanding the goal of the release, and it helped me organize and track feedback and issues.

I wanted to create a direct line of communication to people, so that I could address their individual needs and encourage sharing any thoughts. A big thing that helped a ton in this respect was to provide a list of feedback concerns and questions on the itch page. Nearly everyone who responded with feedback directly addressed those points! It was extremely effective to include those prompts!

I'm so incredibly thankful to the wonderful people I've gotten to meet and interact with this week!

Replays

I've designed my game with functional determinism in-mind and one of the big gets out of that is that it's easy to record user inputs in a game session and then simply replay them on my own copy of the executable.

The replay files are a compressed format including mouse movement and semantically-named game inputs all attached to frame-counts. By running the same version of the game with the same asset stack, I can replay those inputs and watch the player's session seamlessly. And since it's running the actual game I can

  • Hit break points in the code to track down bugs
  • Rewind to previous game states
  • Live-hotpatch the running code with Live++
  • Start playing myself from any point

This is enormous not just for focus testing but also for fixing bugs. I can see a bug in a replay and implement a fix for it all without restarting the program!
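The replay idea can be sketched roughly like this; the structure names and fields are hypothetical stand-ins for whatever the .chrep format actually stores:

```cpp
#include <cstdint>
#include <string>
#include <utility>
#include <vector>

// Hypothetical sketch: semantically-named inputs keyed by frame count.
// Replayed against the same build + asset stack, these deterministically
// reproduce a session.
struct ReplayEvent {
    std::uint64_t frame;
    std::string input;   // e.g. "confirm", "move_north", "mouse_move"
    int x = 0, y = 0;    // optional payload (mouse position)
};

struct Replay {
    std::vector<ReplayEvent> events;  // recorded in frame order

    void record(std::uint64_t frame, std::string input, int x = 0, int y = 0) {
        events.push_back({frame, std::move(input), x, y});
    }

    // During playback: all events that fire on a given frame.
    std::vector<ReplayEvent> eventsForFrame(std::uint64_t frame) const {
        std::vector<ReplayEvent> out;
        for (auto const& e : events)
            if (e.frame == frame) out.push_back(e);
        return out;
    }
};
```

The playback loop would then feed `eventsForFrame(n)` into the same input-handling path the live game uses on frame n.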

The final piece that made this perfect is including OpenSSL and Dropbox's API to automatically upload the .chrep files to my Dropbox for review. Naturally, I made sure to include a disclaimer at the top of the demo allowing the player to opt out of the automatic upload. This system worked like a dream, with replays from playtesters popping up in my Dropbox within hours of sending out copies!

Often, by the time a tester had sent me their feedback email I already watched their session and could directly tie their experience to their feedback as though I had been standing behind them.

Cross-Platform

I recently posted about using Zig's build system to create cross-compilation targets for other platforms. This cost me nearly two weeks in mid-October but it wound up being a huge success! Although I could never get the auto-upload to work on those platforms, plenty of volunteer testers from Mac and Linux reached out to me manually sending me their replays.

I now have a simple batch file in my repo for building Windows, Mac, and Linux game versions all from Windows! M1 Macs coming soon!!

What's Next?

In between responding to testers and writing down notes and fixes, I've been doing a lot of thinking about what my next actual development session even looks like.

As I crept closer to the combat demo being finished, I started to taste some of that forbidden fruit, thinking about the larger game structure. I'm calling the next milestone “The Early-Game Demo” and it's meant to be fairly representative (~80%) of the final product.

As opposed to the combat demo's level structure, this will have the run-based rogue-lite elements and be a showcase for the persistent world. I'm hoping that the full final 1.0 game release someday down the line will essentially consist of the Early Game Demo just with a ton more stuff!

Right now I'm putting my Project Manager hat on to try and build out a roadmap and prioritize all of the systems that need to get built. It's starting to sound like the first major task is replacing the current map system with a world-map grid. More on that another time!

Thank you!

To everyone who has followed my little side-project or gotten anything out of these blog posts! Whether you helped playtest, left a kind comment once, or just smashed that fave button, I really get so much joy sharing this thing with these little corners of internet!

Have a great day! :eggbug:

#gamedev #chron4 #longpost


This has been living in the back of my head for weeks and I haven't had the bandwidth to devote to solutions but here's a #longpost about the specifics and some other thoughts.


First off, how do targets and actions work?

Main Article: The Big Complicated Chronicles Actions System
An ability executes a list of Action Tokens in order, which are compiled from an asset[1]. One type of token can be a Target Request, which is a ton of configuration options for what is selectable in that request; range, direction, line of sight, that sort of thing. The two main genres of target request are defined by their decision-type, which is either “Tile-Pick” (i.e. pick a tile within 4 tiles of the user) or “Directable” (i.e. N, S, E, W).

  1. I say “asset” here because it's not super relevant, you could also call it an action script, but for 99% of cases I'm dealing with now the action script is 1:1 equivalent to the compiled token set.

Other tokens like Damage or Push will then reference the results of a previously-declared target request token.

Why does this make behavior hard?

Calculating AI for “Move/Attack” is trivial because it's a single directional target request and you just plug in the direction of the player, but that's hard-coded. Something like “Blink Strike” may have a token list more like this:

  1. Declare target tar which is any occupied tile within 10 tiles of the user.
  2. Declare target dir which is a range-1 directable empty location directed from the result of tar
  3. Play “blink” animation from origin to dir
  4. Teleport actor at origin to dir
  5. Play “attack” animation from dir to tar
  6. Damage actor at tar

You can read this action set and get an idea of what the ability does but ask yourself, how do you programmatically reason with this ability's function? The token system is granular and abstract enough that extremely complicated multi-stage dependent abilities can be made with it, which is an important part of the variety I want in the abilities in the game!

How does the NPC behavior decision step determine which tile position to plug into tar and what direction to plug into dir??? Why did it even pick Blink Strike in the first place? How did it know it could possibly be in range to use that? How do you even reason that one target will move you but another will damage a target? Maybe you can start to picture some high level solutions here where you need to break down the state of the board to determine what is available but guess what? You cannot know what tiles are available to pick for dir until you simulate the entire turn up until that point! Decisions are made based on the expected board state at the time of the actor acting, and then saved until all decisions are made.

A 3-step targeted ability consisting of a tile-pick within 10 tiles, followed by a dependent direction, followed by a 5-away tile pick gets you to 50 thousand possible decisions, and it explodes from there if you want to do multi-turn solving. If each combination requires you to copy the game state to simulate the outcome, well, you're sunk!

How do you select what ability to even use?

There's a very large spectrum of how the enemy behavior will ultimately work. The dream was to be able to define “Personality” assets where the actor will have a set of goals and will be able to reason with the abilities they have and their resources and cooldowns to determine correct usages of those abilities to accomplish the goals of themselves or their faction. I still think this is attainable!

There's also a lower-tech approach, which is very simple rules-based priorities. A behavior can be a list of states in priority order. Each state would contain:

  1. A list of conditions to satisfy that are easy to calculate: “further than X tiles away from enemy”, “not in line-of-sight from enemy”, “there's a nearby empty choke-point”, etc.
  2. An explicit ability to use: “move/attack”, “blink strike”
  3. A list of desired outcomes that are easy to calculate: “Target attacked”, “User moved closer to target”, “User is covering choke point”

To determine the decision on a turn, iterate over the list until a state's conditions are met, then attempt to satisfy the outcomes by solving the possible target combinations of the ability; being unable to satisfy the outcomes counts as a failed condition, so move on to the next state.
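That decision loop might look something like the following sketch; all the types here are hypothetical simplifications, with conditions and outcome-solving reduced to callbacks:

```cpp
#include <functional>
#include <optional>
#include <string>
#include <vector>

// Hypothetical rules-based behavior state, as described above.
struct BehaviorState {
    std::vector<std::function<bool()>> conditions;  // e.g. "further than X tiles"
    std::string ability;                            // e.g. "blink strike"
    // Solver stub: true if target decisions satisfying the outcomes exist.
    std::function<bool()> solveOutcomes;
};

// Walk the priority-ordered states; unsolvable outcomes count as a
// failed condition and fall through to the next state.
std::optional<std::string> decideAbility(std::vector<BehaviorState> const& states) {
    for (auto const& s : states) {
        bool ok = true;
        for (auto const& c : s.conditions)
            if (!c()) { ok = false; break; }
        if (ok && s.solveOutcomes())
            return s.ability;
    }
    return std::nullopt;  // no state applies this turn
}
```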

I think the important thing to note about both approaches, one being dynamic ability selection from a personality solver, the other being an explicit rules-based approach, is that both still require a “fill in the target requests from this ability programmatically” step, and ultimately that is the actually hard part!

So, regardless of how we got to the answer of “this is the ability we want to use” we still have to optimize the target request decision-making.

Making an ability solvable

Rather than worrying about how to solve all the possible combinations of target decisions, we can just do what A* does, which is adding a heuristic.

If we think back to solving a series of target requests, the difficult cases are always the tile-pick choices. On a graph of possible solutions to traverse, a directional choice has up to 4 neighbor nodes, but a tile pick could be any tile within a range and can scale up very quickly. Picking a tile within 5 tiles of the user has 60 neighbors!
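Assuming range here means Manhattan distance (which matches the 60 figure), the neighbor counts are easy to verify: tiles within range r of the user, excluding the user's own tile, number 2r(r+1):

```cpp
#include <cstdlib>

// Count tiles within Manhattan range r of the origin, excluding the
// origin itself. Brute force over the bounding square; the closed form
// is 2*r*(r+1).
int tilesWithinRange(int r) {
    int count = 0;
    for (int y = -r; y <= r; ++y)
        for (int x = -r; x <= r; ++x)
            if ((std::abs(x) + std::abs(y)) <= r && !(x == 0 && y == 0))
                ++count;
    return count;
}
```

So a directional request has 4 candidates, a range-5 tile pick has 60, and chaining a few tile picks multiplies those counts together.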

Similar to pathfinding on a 2D grid, if all target requests were directional there would be far less concern about combinatorial explosion. And even if my computing power were infinite I would still have a problem where I can tile-pick until a range condition is satisfied, but I can't solve for the best tile-pick (such as closest or furthest).

One solution to this could be actually as simple as tagging the target requests in the ability asset! I could very easily tag a tile-pick target request token with “Prefer closest to/furthest from enemy/ally/choke-point/wall” and tag directional requests with “away from/toward” similarly. This way I could encode the intent of a target request into the ability definition.

On to the solver: for tile-pick requests, I can now sort the potential tiles by how close they are to the tagged reference. I can even choose the closest tile as the new root and only allow up to 4 neighbors from that root. Essentially, by declaring the intent of the tile-pick, I can tightly constrain the considered neighbors and bring it closer to the requirement of directional requests.
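A hedged sketch of that sorting step, with a hypothetical Tile type and distance helper standing in for the game's real types:

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

struct Tile { int x, y; };

int manhattan(Tile a, Tile b) {
    return std::abs(a.x - b.x) + std::abs(a.y - b.y);
}

// Order candidate tiles so the solver tries the most-intended picks
// first, per the tag ("prefer closest to" vs "prefer furthest from"
// some tagged reference like an enemy or choke-point).
void sortByPreference(std::vector<Tile>& candidates, Tile reference, bool preferClosest) {
    std::sort(candidates.begin(), candidates.end(),
        [&](Tile a, Tile b) {
            int da = manhattan(a, reference), db = manhattan(b, reference);
            return preferClosest ? da < db : da > db;
        });
}
```

From there, treating the best-ranked tile as the new root and expanding only a handful of neighbors keeps the search close to the 4-way cost of directional requests.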

In Conclusion

At some point I need to sit down and actually get coding! But I feel a little less lost with this outline and some of my chief concerns are addressed. It's worth a shot now :host-nervous:

Thanks for reading I hope this was interesting! Feedback and criticism are all welcome! :host-love:

#gamedev #chron4 #longpost


Chronicles IV: Ebonheim attempts to mix the “instant-turn-based” combat of a classic roguelike (what do we call that? a dungeon crawler? a Nethack-like?) with more complex targeted abilities like a traditional grid-based tactics RPG such as Disgaea and also have perfect-ish deterministic turn information like Into the Breach.


Defining the Problems

Into the Breach's Prediction Model

ItB is really smart and innovative because it doesn't only show what the AI units intend to do on their turn, it also shows the consequences of the player's decision prior to committing to it. This would normally be a really complex thing to get right but not for ItB because the prediction problem is heavily simplified by a very key design decision: the board can't change between decision and execution.

Since all enemy units act together on their turn stage, they can never interrupt a player's plan. The player can decide what a unit will do, calculate its outcome by only simulating one action (based on the current board, which is guaranteed to be accurate), and then instantly execute it, modifying the board.

There's no turn-order in the classic sense that requires a player decision to be delayed and executed at a later point when the board may have been modified by other units acting first.

Chronicles' Design Constraints

Chronicles wants to have this nice feature of showing the intended decisions of AI as well as the consequences of player decision prior to committing. However, there are several design constraints for the game that make this a lot harder to predict and message:

  • On every turn, every actor acts in a calculated turn-order.
  • When deciding an action, for both player and AI, the decision should be based on the predicted game-state at that point in the turn-order, taking into consideration previous actors' intended decisions.
  • Abilities must be able to have complex, multi-stage, multi-tile, dependent targeting options.
  • For each stage of targeting, the game should render the outcome of making that decision by simulating the full turn.
  • All turn decisions must still be able to function relative to the actual game state at time of execution, even if it doesn't match the predicted state at time of decision.

I also wrote out a long list of abilities I already intend to implement. Some of those are a surprise but many of the basics are:

  • Pushes & Pulls that knock a target actor in a direction, potentially causing damage
  • Redirects which modify the already-decided direction of an actor's target decision
  • Teleports which zip-zap actors around the board
  • Stuns that interrupt an actor's decision before they are able to execute

Finally, let's also throw in some architecture design constraints that will make the final system scalable into the future:

  • Actions need to be a Game Asset. This carries the same constraints all Chronicles Assets have which is that they must be editable in-engine and edits must affect the running game instance live.
  • Actions need to be able to have features stapled onto them ad infinitum without increasing the overall complexity of the architecture.
  • Adding new action functionality should never have to modify the prediction engine.
  • Actions should be able to, in the future, support looping, branching, and more complex flow control.

The Solution

Breaking it Down

To try to build a unified theory of Actions that satisfied all constraints, it was helpful to start by separating concerns, first by user. Who are the consumers of this system?

  1. The Developer: Writes C++ and adds new features to the system like pushes and branching and looping. Cares most about the ease of adding and tweaking types of actions with minimal boilerplate. Doesn't want to ever have to touch the prediction code again.
  2. The Content Creator: Uses the in-engine editor to define arbitrary actions for use in all the different abilities. Likes having tons of knobs and clickers to tweak everything and make unique abilities. Expects everything to have a tactile UI and doesn't want to write scripts.
  3. The Game Renderer: Needs to be able to inspect a given action against an immutable game state and then draw both decision-time and execution-time UI and messaging. Really doesn't want to have to care about the underlying assets; enjoys being real stupid and just simulating component parts to draw.
  4. The Game Logic Step: Needs to be able to modify itself based on the actions. Also doesn't want to care about the underlying asset; just wants to loop through the actions and call Do(). Likes long walks on the beach and trivial copyability.

I realized that satisfying the Content Creator's stories is a fairly isolated set of problems. Most of what makes their life easiest doesn't need to touch the other users. What they need is Game Assets that serialize and deserialize, have UI, and are able to be referenced in immediate-mode from the asset directory. So this is where I split the problem into two distinct parts: Actions and ActionTokens.

Actions

An Action is a pure-virtual interface (C-style vtable in my implementation but still). Because this is a Game Asset, these are completely const and immutable during gameplay. This interface has the functions create(), destroy(), serialize(), deserialize(), and, most importantly, doUI() and compile().

doUI() uses ImGui's immediate-mode idiom for rendering out a complete frame of UI for modifying the content of that Action. A lot of my UI is also driven by code-generated reflection so it's extremely easy to throw together some rapid UI for modifying a new type of action.

compile() spits out a set of ActionTokens, which are consumed by the other users.

What makes the Action Interface really slick is that you can make an Action that is, itself, a list of Actions. The ActionList implementation just holds onto a list of child Actions and calls the virtual functions on each of its children. This makes all Actions reusable, modular, and embeddable!

Right now there are 3 action implementations: DeclareTarget, MoveAttack, and ActionList. In the future, ActionList can be expanded to have options for looping and conditional branching, and of course new Actions are easy to add like pushes, teleports, or AoE damage.
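A minimal sketch of the C-style vtable shape described above; create/serialize/deserialize/doUI are omitted for brevity, the token set is reduced to a bare list, and all names are hypothetical stand-ins rather than the engine's real types:

```cpp
#include <vector>

// Stand-in for the real compiled token set.
struct ActionTokenSet { std::vector<int> tokens; };

struct Action;

// C-style vtable: each Action implementation fills one of these with
// its function pointers; one shared table per action type.
struct ActionVTable {
    void (*destroy)(Action* self);
    void (*compile)(Action const* self, ActionTokenSet& tokens);
};

struct Action {
    ActionVTable const* vtable;  // shared per-type function table
    void* data;                  // per-instance payload (e.g. a child list)
};

// Dispatch helper, as used recursively by things like ActionList.
void actionCompile(Action const* a, ActionTokenSet& tokens) {
    a->vtable->compile(a, tokens);
}
```

An ActionList implementation would just store child `Action` pointers in its `data` payload and call `actionCompile` on each, which is what makes Actions embeddable.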

With the Game Asset side of the problem completely fleshed out and implemented, the Content Creator is happy, and the Developer is happy because all the UI code and boilerplate for ser/deser is contained in a separate module that never touches game state.

ActionTokens

static void _actionList_Compile(Action const* self, ActionTokenSet &tokens) {
   auto data = (ActionDataActionList*)self->data;

   auto tok = tokens.tokenGen.alloc();
   tokens.tokenType[tok] = ActionTokenType_BeginScope;
   tokens.tokens.push_back(tok);

   for (auto&& a : data->actions) {
      actionCompile(a, tokens);
   }

   tok = tokens.tokenGen.alloc();
   tokens.tokenType[tok] = ActionTokenType_EndScope;
   tokens.tokens.push_back(tok);
}

For how we ended up at ActionTokens, let's talk a little bit about what a program is versus what a programming language is. When you write a program in C, you have all sorts of bog-standard utilities like looping, conditions, scoped variable declarations, stacks, heaps, memory referencing, etc. You can think of a program as a series of expressions. Every expression has different behavior and different sets of inputs but ultimately the written pre-compiled program is one big expression that contains expressions that contain expressions all the way down.

When you compile, these expressions are translated. Loops and Branches become GOTOs/JUMPs, memory value referencing turns into a whole ton of MOVEs and PUSHes, and you end up with a machine-readable completely linear list of instructions. Your PC doesn't need to know anything about C, as long as the instruction set is compatible with the CPU.

You might be catching on that this is a great metaphor for Actions! If Actions are the expressions of a program, ActionTokens are the instructions! Before our Actions-as-defined-by-our-Game-Assets can be used by the other two users, Step and Render, we have to compile it to a linear list of instructions.

Data-Flow

One problem I ran into immediately is that while we have a token for declaring a target-request with a specific ID, it's actually really tricky to determine what targets are available at different points in the execution. Similarly it is difficult to get the decisions for each request stored in an appropriate place so that the tokens that reference the decisions are able to resolve their targets. Here's an overly-complicated situation you could run into with embeddable Action Lists:

Begin Scope
   Declare Target "T1"
   Declare Target "T2" relative to "T1"
   Move/Attack "T2"
   Begin Scope
      Declare Target "T1"
      Move/Attack "T2"
      Declare Target "T3"
   End Scope
   Move/Attack "T3"
End Scope

While looping over our tokens prompting the user for decisions on all of the target requests, you need these name-resolutions to work the way they would with scoped variables in a normal programming language!

And so in addition to the const, immutable TokenSet we got from the compiled Action, we now also need a very mutable TokenMemory for keeping track of all of this! With TokenMemory we can iterate through our token set and define memory addresses to all of these target references so we don't need to worry about scoped name resolution anymore.

Begin Scope
   Declare Target "T1" -> 0x00: New Target
   Declare Target "T2" relative to "T1" -> 0x01: New Target referencing 0x00
   Move/Attack "T2" -> referencing 0x01 relative to 0x00
   Begin Scope
      Declare Target "T1" -> 0x02: New Target
      Move/Attack "T2" -> referencing 0x01 relative to 0x00
      Declare Target "T3" -> 0x03: New Target
   End Scope
   Move/Attack "T3" -> Error, unresolved name in scope
End Scope

With the links made between all the tokens in the memory object, we can get decisions from the player or AI, and assign them to the appropriate place to be referenced by the other tokens.
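The scoped address-assignment pass can be sketched with an ordinary scope stack; this is a hypothetical simplification of what the TokenMemory linking pass might do, not the actual implementation:

```cpp
#include <cstdint>
#include <optional>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical sketch: walk the token list once, assign each declared
// target a memory address, and resolve references against the
// innermost enclosing scope (just like scoped variables in a language).
struct ScopeResolver {
    std::vector<std::unordered_map<std::string, std::uint32_t>> scopes;
    std::uint32_t nextAddress = 0;

    void beginScope() { scopes.push_back({}); }
    void endScope()   { scopes.pop_back(); }

    // Declare Target "name" -> fresh address in the current scope.
    std::uint32_t declare(std::string const& name) {
        std::uint32_t addr = nextAddress++;
        scopes.back()[name] = addr;
        return addr;
    }

    // Look up a name from innermost scope outward; nullopt = the
    // "unresolved name in scope" error from the example above.
    std::optional<std::uint32_t> resolve(std::string const& name) const {
        for (auto it = scopes.rbegin(); it != scopes.rend(); ++it) {
            auto found = it->find(name);
            if (found != it->end()) return found->second;
        }
        return std::nullopt;
    }
};
```

After this pass, downstream tokens can reference plain addresses instead of names, so scoping never has to be re-solved at decision or execution time.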

Consuming a TokenSet

Now that we have everything we need for our token set to actually affect things, how do we actually use it? If we think back to our list of design constraints, a TokenSet covers only the things performed by a single actor. That actor might be going 3rd in a turn-ordered list of 7 actors. That actor might be the player-character, which means all the other actors have already decided what they will do. Therefore, your target-highlighting needs to convey to you the consequences of that particular target decision at the point in the turn order that the player acts. How do we do that?

This brings us all the way back to functional programming, inline state modification, immutability, and, chiefly of all, trivially-copyable Game State.

When I first started trying to tackle this system months ago I very naively attempted to make little snapshots of the board state that I could pass around to different places to make decisions relative to the expected board. This runs into a lot of issues as you start to scale up in complexity and the amount that you need to be correct and in-sync in these little snapshot objects starts to grow into a medusa.

So hey, what if you just copied your literal entire game state and then applied every token to it in-order until you reach your targeted simulation point.

And that's exactly what we do:

void _renderActorDecisionsUnderActors(EGATexture& target, GameState const& g) {
   auto gPrev = g; // copy game state to preview game state
 
   // loop over all turn members in turn-order
   for (auto actor : g.turn.members) {
      // at this point actor should have the full list compiled AND memory should be filled with decisions (except for the player)
      auto &actorState = *g.turn.actors.find(actor);
 
      size_t idx = 0;
      auto &set = actorState.tokens;
      auto &mem = actorState.tokenMemory;
 
      while (idx < set.tokens.size()) {
         // if we reach the player in the turn-order and they're still deciding this frame, skip them
         if (actor == gPrev.player_controlled && turnMemberTokensNeedDecisions(g, actor) && idx == actorState.tokenIndex) {
            // we're at the player's current decision node, break
            break; // go on to next actor
         }
         // render the token messaging (show arrows, highlight squares, show damage)
         _renderActionTokenUnder(target, gPrev, actor, set.tokens[idx], set, mem);
 
         // apply the token to the PREVIEW game state
         gameStateApplyActionTokenForPreview(gPrev, actor, set.tokens[idx], set, mem);
         ++idx;
      }
   }
}

We can do this every frame for our Render user who never needs to modify the actual source GameState!

This method of simulation is so intensely simple and intuitive and it solves all simulation problems. Need to cancel target request 2-of-3 after noticing that request 1-of-3 was too short? re-simulate. Need to use an ability that has Turn Priority and makes the player suddenly go first? re-simulate.

In the end, both the Render user and the Logic-Step user are the same! They're just interpreters of the compiled token set. That's really all there is to it!

Out of everything we've gone through and decided up to this point, the beauty is that the turn prediction engine is actually the simplest and smallest section of code out of them all.

Closing Thoughts

Why didn't you use Lua for your action scripts?

I want to use Lua! For... some things. The truth is that scripts are data is logic is data. I could pretty easily replace the Game Asset Action with a Lua script and let people go crazy with loops and branching and everything. But then I lose all this cool UI! ImGui lets me do scripting without actually writing scripts and that is COOL.

Preview vs Execution

Because we're making a roguelike and perfect knowledge would ruin the fun, we have two different execution functions for the different ActionTokens. ApplyForPreview will do things that may modify the board state but provide imperfect information. ApplyForExecution is when we actually perform all the actions and loop through all actors and modify the base game state with all the results. The difference between these two is going to be a bit of a grey area and will require a lot of iteration! All I know is sometimes you need to see an ogre and for it to say “Doing ???? to you.” and for you to need to change your pants.

A Very Small Example

After I got the system working last night, I rigged up a version of the move attack that asks you to decide on three consecutive target-requests, each one relative to the previous. The Move-Attack at the end would then act upon the final selected tile. The following gifs were made with zero code changes, only modifying the Action assets:

You made it all the way to the end!

:eggbug: Good Job!

#gamedev #chron4 #longpost

@britown@blog.brianna.town

I posted some gifs of my combat system mostly working, which I'm proud of, but I want to go ahead and do a little write-up on the mechanics as-designed and talk about plans for the future, sooo hit the jump if you're into that.


Philosophy

I really want a combat system that focuses most of its complexity on strategic positioning rather than on numbers and calculations. One direction is to take the NetHack route and focus on itemization, +10% bonuses, and heavily numbers-driven character advancement, but I wanted a deterministic, rules-based system that is simple to pick up.

I was of course heavily inspired by Into the Breach with its chess-like puzzle boards, but I also really appreciate a similar ethos in Slay the Spire; both games attempt to provide the player with as much information as possible about the outcome of their choices while still being challenging.

Rules of Combat

The sum-total set of rules that governs the above gif is as follows:

  1. Actors have health (red pips) and stamina (green pips)
  2. When an actor receives damage, stamina is removed before health
  3. When an actor attacks, it also costs stamina
  4. If an actor loses all stamina before acting, their attack will be cancelled
  5. Stamina spent on attacking will recover on the actor's next turn (right before they act)
  6. Stamina lost from damage will recover 2 turns after it is lost (right before the actor acts)
  7. If an actor is being targeted by an attack and moves away before the attack executes, they will take an automatic 1 damage (“dodge cost”)
  8. Actors have speed, which determines turn-execution-order (roman numerals)
  9. The player has turn-order priority within their speed category
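Rules 2 through 4 are simple enough to capture in a few lines. This is a sketch under my own assumed types (`Actor`, `applyDamage` are invented names), not the game's actual code:

```cpp
// Hypothetical actor state; the red/green pips map directly onto these counts.
struct Actor {
    int health;
    int stamina;
};

// Rule 2: incoming damage removes stamina before health.
Actor applyDamage(Actor a, int dmg) {
    int fromStamina = dmg < a.stamina ? dmg : a.stamina;
    a.stamina -= fromStamina;
    a.health  -= dmg - fromStamina;
    return a;
}

// Rule 4: an attack is cancelled if the attacker lost all stamina first.
bool canStillAttack(const Actor& a) {
    return a.stamina > 0;
}
```

So an actor with 3 health and 2 stamina who takes 3 damage ends up at 2 health, 0 stamina, and their queued attack gets cancelled.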

In the above gif I slowed the execution way down to see all of the parts. White pips are stamina that will recover before the next action. The player dispatches a low-stamina enemy that is faster (moves first) and then suffers dodge cost to reposition around the enemies. They then use player advantage to cancel the enemy hits and take them out one at a time before they can act.

EGA UI

My tiles are 14x14 and the entire frame buffer shares a single 16-color palette from the 64-color EGA color space. I don't have arbitrary scaling or rotation. Everything has to fit under these restrictions including UI elements. This proves very challenging when you're making an RPG!

A large part of breaking this down and reducing complexity is to define meaning for specific palette slots. For instance, Palette Index #0 is always “Black,” AKA the “Background Element Color” and the “Border Color.” When creating art for map tiles, UI elements, and actor sprites, I always use index 0 for this purpose. That way, if I make a new palette, I can change #0 to a non-black color and expect a fairly uniform effect across all of the art.
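The slot convention could be expressed like this (a sketch; `Palette` and `retintBackground` are names I'm inventing for illustration):

```cpp
#include <array>
#include <cstdint>

// Slot 0 is the background/border color by convention, so swapping the
// color stored there retints every piece of art uniformly.
constexpr int kBackgroundSlot = 0;

struct Palette {
    // 16 slots, each an index into the 64-color EGA space (0-63).
    std::array<uint8_t, 16> colors;
};

Palette retintBackground(Palette p, uint8_t egaColor) {
    p.colors[kBackgroundSlot] = egaColor;
    return p;
}
```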

Next it just comes down to the size of UI elements. Health and stamina are shown on the tiles as pips which are, at minimum, 2x4 pixels, so I literally can't have more health and stamina in one row at the bottom of the tile than the tile can hold (7). I have a system to show multiple rows, but a lot of these restrictions actually feed back into the simple design of the combat in the first place. I don't expect to have more than 30 combined health/stamina on a single actor, just like I don't expect more than 16 actors to be in the turn order (because that's the largest roman numeral I can fit on a tile).
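The pip math falls straight out of the tile size. A quick sketch, with constants assumed from the numbers above:

```cpp
// 14x14 tiles and 2-pixel-wide pips: at most 14 / 2 = 7 pips per row.
constexpr int kTileWidth  = 14;
constexpr int kPipWidth   = 2;
constexpr int kPipsPerRow = kTileWidth / kPipWidth;

// Rows needed to display an actor's combined health + stamina pips.
int pipRowsNeeded(int health, int stamina) {
    int total = health + stamina;
    return (total + kPipsPerRow - 1) / kPipsPerRow; // ceiling division
}
```

At the assumed maximum of 30 combined pips, that's 5 rows on one tile, which is part of why the caps stay low.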

Other UI considerations include things like the pants color of a sprite happening to line up with the pips in such a way that it appears the actor has more stamina than they do. Another fun one was reducing black border pixels around elements to allow more of an actor sprite to bleed through from below.

Improving AI

My A* solves are fine enough for these gifs, but you may notice a recurring problem: enemies will often arbitrarily block each other trying to get adjacent to the player. This usually forces their allies to go around, allowing the player to exploit the position and pick the enemies off one at a time. This sort of thing happens in NetHack and other dungeon crawlers, where the common strategy is to always just back up into a hallway and force enemies into a bottleneck. I don't want this strategy to be universally viable.

A big part of the improvement is going to be mapping the possible decisions onto a weighted graph and solving it with Dijkstra's. I want allied enemies to give multi-enemy-to-player adjacency a higher weight. In the future I will also want enemies to block doorways and be smarter about getting into attack range. Player Advantage is a really big one because enemies can't react to a player action on the turn it happens; the counter-balance, to me, is better strategic positioning behavior.
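The Dijkstra core itself is tiny; the interesting work is all in choosing edge weights (penalizing moves that block an ally, favoring positions that put multiple enemies adjacent to the player). This sketch runs over a plain adjacency list, not the engine's real decision graph:

```cpp
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Dijkstra over a weighted graph of candidate decisions. adj[v] holds
// (neighbor, weight) pairs; positioning preferences would be encoded in
// the weights. Returns the cheapest cost to reach each vertex from start.
std::vector<int> dijkstra(
    const std::vector<std::vector<std::pair<int, int>>>& adj, int start) {
    std::vector<int> dist(adj.size(), std::numeric_limits<int>::max());
    using Node = std::pair<int, int>; // (distance, vertex)
    std::priority_queue<Node, std::vector<Node>, std::greater<Node>> pq;
    dist[start] = 0;
    pq.push({0, start});
    while (!pq.empty()) {
        auto [d, v] = pq.top();
        pq.pop();
        if (d > dist[v]) continue; // stale entry
        for (auto [to, w] : adj[v]) {
            if (dist[v] + w < dist[to]) {
                dist[to] = dist[v] + w;
                pq.push({dist[to], to});
            }
        }
    }
    return dist;
}
```

Unlike plain A* to the nearest open tile, weighting the whole decision graph lets an enemy prefer a slightly longer path that leaves room for its allies.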

The end result, I hope, is a system where 2-on-1 fights are fairly unwinnable with basic combat alone. Overcoming difficult encounters will need to rely heavily on the upcoming abilities system.

Future Plans

Next up is more complicated attack patterns. I want to implement ranged attacks that don't require dodge-cost to avoid and play with that. My original spec also included a 3rd pip type, armor, which sits between health and stamina and is purely there to mitigate a new stun mechanic where certain attacks can cause action interrupt.

All of this is also building toward the really good stuff which are arbitrary cooldown abilities. I want abilities to make or break encounters because which abilities you have informs your character's build. Abilities will revolve around concepts such as pushing other actors around, repositioning, teleporting, redirecting enemy actions, sneaking / visibility, and bypassing the existing stamina rules.

Once you start thinking of things like side-pushing an enemy so they kill an adjacent enemy on your turn before that enemy attacks, the rest all starts to spill out; it's very exciting!

First Playable

I've reduced the full scope of the larger game project down to just a combat demo. This will be a standalone game consisting of a dozen or so pre-made encounters; the player will need to solve each encounter with their weapon and abilities, without dying, to continue on to the next. I hope that with this project I'll have a very well-fleshed-out combat system before I move on to the exploration and roguelike loop systems.

You Made it!

:eggbug: Good Job :eggbug:

:host-love: Thanks for reading :host-love:

#gamedev #chron4 #longpost

@britown@blog.brianna.town