I’m at a stage with Black Annex at the moment where most of the development time is spent creating content rather than working on the technical side of things. The features are nailed down (although some are still not implemented) and most of the technical hurdles are overcome (except optimisation; there’s always room for more optimisation), so I don’t really get to just play around with code all that much in Black Annex right now.
So, one week ago I opened a blank new project and started creating something. This blog is always about the more technical side of what I’m doing, so here’s some video of something totally new.
Now that the campaign editor can create real content for the game engine, I have just been churning out graphics and throwing little test campaigns together to see how fast they run and how long they take to load. At the moment loading is slow with what you see here taking about ten seconds to load into the game. I’m not sure how this will improve over time, but it’s good to be able to finally benchmark real content in the engine rather than just hard-coded test data.
I’m finally getting a chance to throw huge arrays of sprites into the game and seeing it render them at respectable speed.
I’ve spent the whole afternoon working on nothing but AI. The work being done is on the way that the office “furniture” behaves. At the moment, the player is able to send the PC desks in the office haywire, so a nearby staff member will rush over to attend to them. After attending to them for a certain amount of time, they are considered “Fixed” and the staff person goes back to their patrol.
To do this, the following happens: the player finds a server rack, which is connected to two computers in the office. Clicking the server rack causes the rack to make the decision “Fire My Triggers!”: it finds the two computers it’s connected to and tells them to consider making the decision “Attention Ornament!”. If all goes well, the computers make this decision, which causes a wave of sound to pour out of them over a certain radius. Anything hit by this radius will make the decision “Attend to Ornament!”. When one of the ornaments has captured someone’s attention, it goes into the “Waiting for Cooldown!” decision. It will sit there until its captured staff member walks up to it. When they touch, they both enter the “Cooling Down Ornament!” decision and a timer counts down while the staff member works on it. Once the counter hits zero, the ornament decides to “Return to Idle Ornament”, and the staff member decides to “Consider next Patrol Path node”.
When in “Waiting for Cooldown!”, sometimes the staff member never makes it to the computer. He might get stuck or die on the way. While waiting for cooldown, the ornament’s “Impatience” rises slowly. If it hits a certain level, the ornament will return to “Attention Ornament!” and capture the attention of someone else.
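The chain of decisions above is effectively a small state machine per ornament. Here’s a minimal sketch of that idea in Python; all the names and numbers (Ornament, cooldown_time, impatience_limit) are mine for illustration, not the game’s actual code.

```python
# Illustrative state machine for the ornament decision chain:
# Idle -> Attention -> WaitingForCooldown -> CoolingDown -> Idle,
# with an Impatience counter that kicks it back to Attention.

class Ornament:
    def __init__(self, cooldown_time=3, impatience_limit=5):
        self.state = "Idle"
        self.cooldown_time = cooldown_time
        self.impatience = 0
        self.impatience_limit = impatience_limit
        self.timer = 0

    def fire_trigger(self):
        # "Attention Ornament!" — start demanding attention.
        self.state = "Attention"

    def capture(self):
        # A staff member noticed us; wait for them to arrive.
        self.state = "WaitingForCooldown"
        self.impatience = 0

    def tick(self, staff_arrived=False):
        if self.state == "WaitingForCooldown":
            if staff_arrived:
                self.state = "CoolingDown"
                self.timer = self.cooldown_time
            else:
                self.impatience += 1
                if self.impatience >= self.impatience_limit:
                    # Gave up waiting — grab someone else's attention.
                    self.state = "Attention"
        elif self.state == "CoolingDown":
            self.timer -= 1
            if self.timer <= 0:
                self.state = "Idle"  # "Return to Idle Ornament"
```

The impatience path is what keeps a computer from waiting forever on a staff member who got stuck or died along the way.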
After lots of trials in an office with eight computers and only five staff members, every computer always got fixed eventually. The office is starting to really feel alive, watching staff members hurry around fixing problems and reacting intelligently.
Besides cleaning up a lot of functionality in the code, the last few days have mostly been spent on the design side of things, so there’s not a whole lot to talk about at this point. I’ve spruced up the rendering routines to go along with all the new graphics I’ve been working on. Creatures now have a much better method of animating their deaths, and dropping lots of entities into the game with minor/major design differences (e.g. gender, clothing types) actually works in a demonstrable manner now.
The “Emote” popup over creatures’ heads now has different little icons that can appear in it, such as “?” and “!”. At the moment, the AI loop calls different ones based on what is happening to the entity.
In the meantime, no screenshots of the new graphics until they take a bit of a leap in fidelity. Most of the prototype graphics have been dropped now and a few of the alpha ones are in so far, but I’m still using prototype assets for the screenshots.
A lot of work has been spent recently on animation and graphics, which I can’t show right now, but just today I finished the very basic framework for the user interface. It’s a fairly open design: you declare basic things about a “Window” and what “Buttons” live on it, and the game loop always checks mouse clicks to see whether the user clicked inside a window; if they did, it calls the function set for the button they clicked on.
At the moment there’s a window called the “Asset Manager”, upon which the buttons are renderings of all the controllable creatures the player can use. If you click on one of the “buttons”, the function for that button causes the camera to zoom to that creature, and the player takes control of it. The way it’s designed should make it fairly easy to build a fast user interface that quickly captures all the input it needs to react to.
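The window/button dispatch described above boils down to a rectangle hit-test followed by a callback. A minimal sketch, assuming screen-space windows and window-local button rectangles; all the names here are mine, not the game’s real API.

```python
# Illustrative UI dispatch: a Window owns button rectangles plus
# callbacks, and handle_click finds which (if any) button was hit.

class Window:
    def __init__(self, x, y, w, h):
        self.x, self.y, self.w, self.h = x, y, w, h
        self.buttons = []  # (local_x, local_y, w, h, callback)

    def add_button(self, bx, by, bw, bh, callback):
        self.buttons.append((bx, by, bw, bh, callback))

    def handle_click(self, mx, my):
        # Only react if the click landed inside this window at all.
        if not (self.x <= mx < self.x + self.w and
                self.y <= my < self.y + self.h):
            return False
        # Convert to window-local coordinates before testing buttons.
        lx, ly = mx - self.x, my - self.y
        for bx, by, bw, bh, callback in self.buttons:
            if bx <= lx < bx + bw and by <= ly < by + bh:
                callback()  # e.g. zoom camera to the chosen creature
                return True
        return True  # inside the window, but no button was hit
```

Returning True for any click inside the window lets the game loop know the click was “consumed” by the UI and shouldn’t also be treated as a click on the game world underneath.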
I’ve also done some work on the internal font set, it has an interlaced look to it now which can be adjusted on a sliding scale. There’s also now the ability to click and drag out a square around creatures to take control of multiple ones at once.
Entities can now be given lists of co-ordinates that make up “Patrol Paths”, with each location accompanied by a “Pause Time”. When a creature is walking around on patrol, they’ll go from node to node, pausing at each one for the amount of time specified. If something more interesting happens while they’re patrolling, they’ll go attend to it before returning to the next patrol node in their list.
To get this behaving right in as many situations as possible, I’ve had to work a lot on the main AI loop and the concept of “Interest” in a decision. Whenever a creature decides to do something, they also are given a level of “Interest” in that thing, which constantly goes down the longer they keep doing it until they get sick of it. Creating a balance between which decisions should override others, and what to do when there’s nothing of interest left to do has been a lengthy process, but it’s starting to take a nice shape now.
I’ve had to make a few improvements to the way Pathfinding works as well. Because Pathfinding is a very memory-intensive process, I only allow 15 creatures to use it at any one time. The problem with this was that if a creature attempted to pathfind to an impossible-to-reach location, it would take up one of the pathfinding slots forever. I’ve changed the rules a little bit: every creature gets five chances to take a single step using pathfinding. If they can’t make a move, they release their pathfinding slot for someone else to use and decide that they’ve come as far as they can.
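The slot rule above can be sketched as a small pool: a fixed number of slots, a per-creature failure counter, and a give-up rule after five failed steps. Again, these names are illustrative, not the game’s code.

```python
# Illustrative slot-limited pathfinding pool: at most MAX_SLOTS
# creatures may pathfind at once, and five failed step attempts
# means the creature gives up and frees its slot.
MAX_SLOTS = 15
MAX_ATTEMPTS = 5

class PathfindPool:
    def __init__(self, max_slots=MAX_SLOTS):
        self.free = max_slots
        self.attempts = {}  # creature id -> failed step attempts

    def acquire(self, cid):
        if cid in self.attempts:
            return True          # already holds a slot
        if self.free == 0:
            return False         # pool exhausted, wait your turn
        self.free -= 1
        self.attempts[cid] = 0
        return True

    def step_failed(self, cid):
        # After five failed steps the creature decides it has come
        # as far as it can, and releases its slot for someone else.
        self.attempts[cid] += 1
        if self.attempts[cid] >= MAX_ATTEMPTS:
            self.release(cid)
            return False  # give up
        return True

    def release(self, cid):
        if cid in self.attempts:
            del self.attempts[cid]
            self.free += 1
```

The key property is that an unreachable destination now costs at most five attempts rather than holding a slot forever.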
I created a new type of ornament today called a Trigger. They’re just static things that sit around, but if you “Interact” with them (currently by having any player-controlled creature stand in front of one and clicking on it), it will cause all other ornaments listed in its ThisTriggersThat list to go into “Attention!” mode.
I’ve made these very bland looking “server rack” ornaments; if you walk a creature up to one and click on it, a set of nearby computers will start flashing for attention and some other creatures will walk up to them to play with them. It’s the beginning of some very basic “gameplay” mechanic starting to rear its head. All of this work is still self-contained inside the main AI loop, making it very easy to expand on existing creature behaviour and allow creatures to make intelligent decisions about whether they should go play with an ornament that’s demanding attention, or maybe run away from something that’s shooting at them.
All the Creature and Ornament placement and statistics are now controlled by a single function call, made once per creature, instead of the big ugly bunch of declarations at runtime it was previously. It’s becoming much cleaner to load in levels and fill them with things to interact with.
This afternoon was spent completely rewriting the main AI loop into what I hope will be the final layout. All decisions are now made in a boolean function called TryNewDecision where you specify things such as the creature being influenced, who is causing the influence, and how exciting the influence is.
With the entire decision-making process now self-contained instead of being spread haphazardly throughout the game loop, I can fire off decision-based actions much more precisely and easily. To demonstrate this, I wrote some new hooks into the decision-making process. I added a new little “Emote!” sprite to the game so creatures can have speech bubbles above their heads, and I’ve made it so that when a creature sees another creature inside its cone of vision that is not one of its team-mates, it shows a little emote over its head to indicate that it’s looking at something interesting. Another hook I’ve added: when you fire off a weapon and alert an enemy that will fight back at you, the camera quickly zooms in on the enemy you attracted to show that you’ve got its attention.
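One way to picture hooks like these: side-effects (emotes, camera zooms) registered against decision names, fired only when a decision actually goes through. This is a guess at the shape of the idea, sketched in Python with made-up names; the source only says TryNewDecision takes the influenced creature, the influencer, and an excitement level.

```python
# Illustrative decision hooks: callbacks keyed by decision name run
# whenever try_new_decision succeeds, keeping side-effects out of
# the main game loop.

hooks = {}

def on_decision(name, callback):
    hooks.setdefault(name, []).append(callback)

def try_new_decision(creature, decision, influencer, excitement):
    # Self-contained gate: take the decision only if it's more
    # exciting than whatever the creature is already doing.
    if excitement <= creature.get("interest", 0):
        return False
    creature["decision"] = decision
    creature["interest"] = excitement
    for callback in hooks.get(decision, []):
        callback(creature, influencer)  # e.g. zoom camera, show emote
    return True
```

The benefit matches the post: new behaviours attach at one choke point rather than being scattered through the loop.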
With the AI code much cleaner now, I look forward to moving ahead a lot more smoothly with the design flow for creature and ornament behaviour.
I’ve added a new routine that calculates a creature’s cone of vision based on the distance they can see and how wide the cone should be. For debugging, at the moment it doesn’t do much of anything other than draw a graphical representation of a cone in front of their face, but it’s nice to see the way it collides with walls correctly and follows the creatures around as they change directions and move about.
At the moment, calculating the cone of vision for more than ten creatures at a time causes a few frames to drop, so I’m going to have to speed things up a bit, as well as be a bit more intelligent about what creatures actually need to have a cone of vision calculated at any given time.
Update: Okay, I actually spent some decent time optimising the cone of vision logic when I realised I really can’t be calculating it every frame; it needs to be recalculated only when a creature moves, but rendered every frame. I also needed to store the cone of vision in a manageable array so I can quickly check who can see whom (Bool CanThisSeeThat(This, That)). I got this work done and realised I was massively over-rendering the cone of vision graphics before. It’s now much faster and much more intelligent, meaning no more dropped frames, and it’s all kept in a nice little loop called “ManageVision”. Creatures have a boolean called “HasEyes” to decide whether they need vision management, so the ornaments don’t get cones of vision rendered near them.
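The recalculate-on-move-only idea plus the CanThisSeeThat lookup can be sketched like this. The real cone test involves angles and wall clipping; this sketch stubs the geometry down to a plain distance check just to show the caching structure. All names are illustrative.

```python
# Illustrative "ManageVision": visibility sets are recomputed only
# when something moves, and CanThisSeeThat is a cheap cache lookup.
# Geometry is simplified to a distance test for brevity.
import math

class VisionManager:
    def __init__(self, view_distance=5.0):
        self.view_distance = view_distance
        self.positions = {}
        self.visible = {}  # name -> set of names it can currently see

    def move(self, name, x, y, has_eyes=True):
        self.positions[name] = (x, y)
        if has_eyes:  # "HasEyes" — ornaments skip vision management
            self._recalculate(name)
        # Anyone with eyes may now see this mover differently too.
        for other in list(self.visible):
            if other != name:
                self._recalculate(other)

    def _recalculate(self, name):
        x, y = self.positions[name]
        self.visible[name] = {
            other for other, (ox, oy) in self.positions.items()
            if other != name and
               math.hypot(ox - x, oy - y) <= self.view_distance
        }

    def can_this_see_that(self, this, that):
        # The Bool CanThisSeeThat(This, That) query from the post.
        return that in self.visible.get(this, set())
```

The win is the same as in the post: the expensive work happens on movement, while per-frame queries and rendering read from the cached sets.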
I like the attached screenshot here, as you can see four creatures distracted by computers, as well as a cone of vision colliding with a wall, demonstrating how it cannot penetrate the clipping plane.
It’s Anzac Day, so time is well spent getting some artificial intelligence routines beefed up. After creating the “pathfinding” routine a few days back, I quickly disabled it because it was very unpleasant having your creatures take the optimal route to a destination the instant they were sent there. You would click on a tile to send a creature there, and the creature would instantly know that the direct path was not possible and turn around in the other direction to take the better path. It worked well, but it just wasn’t very pleasant on a tactile level.
Back when this was the case, there were two methods of travel: “Beeline” and “Pathfinding”, basically dumb vs. smart movement. Now I’ve created a single method where all creatures always use Beeline until they bump into something, at which point they calculate a pathfinding result and switch to that mode. The result is that your creatures will try to walk directly to your selected destination, and if they crash into a wall they will follow it to a better path.
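The hybrid rule above can be sketched on a tile grid: beeline straight at the target, and only fall back to a pathfinder on the first blocked step. The breadth-first search below stands in for whatever the game’s real pathfinder does; all of it is illustrative.

```python
# Illustrative "Beeline until you bump" movement on a tile grid.
from collections import deque

def step_toward(pos, target):
    # One beeline step: move each axis one tile toward the target.
    x, y = pos
    tx, ty = target
    return (x + (tx > x) - (tx < x), y + (ty > y) - (ty < y))

def bfs_path(start, target, walls, size=10):
    # Plain breadth-first search; a stand-in for the real pathfinder.
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        (x, y), path = frontier.popleft()
        if (x, y) == target:
            return path
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < size and 0 <= ny < size and
                    (nx, ny) not in walls and (nx, ny) not in seen):
                seen.add((nx, ny))
                frontier.append(((nx, ny), path + [(nx, ny)]))
    return None

def walk(start, target, walls):
    pos, route = start, [start]
    path = None  # None means "beeline mode"
    while pos != target:
        if path:
            pos = path.pop(0)           # smart mode: follow the plan
        else:
            nxt = step_toward(pos, target)
            if nxt in walls:
                # Bumped into something: switch to pathfinding mode.
                path = bfs_path(pos, target, walls)[1:]
                continue
            pos = nxt
        route.append(pos)
    return route
```

The tactile effect from the post falls out naturally: the creature visibly heads straight for the click, and only reroutes after it actually hits the wall.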
The pathfinding doesn’t take into account other creatures along the way during the calculation, assuming that by the time it gets to that part of the path, the creature will probably have moved anyway. Because of this, if a pathfind causes a creature to bump into another creature, they will currently both just stop in their place.
Aside from this development in creature movement, I’ve also now completely freed the “Currently Controlled” creature from any hard-coded logic, so you can easily control any creature in the game, and also control as many as you like at once. There is no technical difference between an AI creature and a human-controlled one, just a few arbitrary things like what “Team” they play for.
One other new feature I got working today: “Attention!” for ornaments now calls the nearest creature to come play with it. With some rudimentary “Compass” math, a creature will stand in front of an ornament that is demanding attention and face the correct direction to play with it. They don’t actually do anything yet; currently a computer screen will start to flash green and black and a creature will hurry over to stare at it.
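On a tile grid with four facings, the “Compass” step above reduces to two lookups: the tile one step along the ornament’s facing is where the creature stands, and the opposite facing is the direction it turns to face the ornament. A tiny sketch, with names of my own invention:

```python
# Illustrative compass math: find the stand-in-front tile for an
# ornament, and the facing that points back at it.
FACING_OFFSETS = {
    "north": (0, -1), "south": (0, 1), "east": (1, 0), "west": (-1, 0),
}
OPPOSITE = {"north": "south", "south": "north",
            "east": "west", "west": "east"}

def stand_in_front(ornament_pos, ornament_facing):
    # The spot "in front" is one tile along the ornament's facing...
    ox, oy = ornament_pos
    dx, dy = FACING_OFFSETS[ornament_facing]
    stand_pos = (ox + dx, oy + dy)
    # ...and the creature faces the opposite way, i.e. at the ornament.
    return stand_pos, OPPOSITE[ornament_facing]
```

So a computer at (5, 5) facing south would have its attending staff member stand at (5, 6) facing north, staring straight at the flashing screen.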