This week has been quite the busy networking week. I do mean ‘networking’ with people, not ‘networking’ with a LAN cable, which might be the first assumption given the nature of this blog!
I didn’t get a submission in to the Parliamentary Inquiry – fortunately, I’ve been given an extension until the end of September, so I’ll be able to make it a better, more professional-looking document. That said, there are several quality submissions in place now, and I wonder if my voice is needed. Still, I might as well finish it, eh?
As part of this, I met with several people from the video game industry, chatted with some of the people from the Brisbane and Sydney IGDA branches, and also with what we hope will become the SA branch. I also finally got back to ARGGGH, although it seems the cold I picked up from last week’s job interview knocked me out from then on.
This week has, aside from talking to people, largely been spent driving places and finishing the move into this house. I did get a tiny amount of time working on my own game in Unity, although the main focus was on studying Robert Nystrom’s Game Programming Patterns. I finished the book. It was very insightful. There are four patterns that I want to make use of right now, and they will be the main focus of my coding for the next few weeks. These are changes to the software architecture, so there will be no graphical changes – instead, the frames per second will increase and development will become easier.
Coincidentally, they all relate to the area I am working on right now, physics!
So those patterns are Double Buffering, Space Partitioning, Components, and Data Locality. I also read up on the Singleton pattern in the context of games, and decided that if an object doesn’t hold any of its own data, then it doesn’t need to be instantiated as an object at all. So I was able to purge some of the SoundHandler/PhysicsHandler/***Handler type classes. I’ll be putting these all behind a namespace when I’m finished so that I can refer to them as (somename)::somefunction, because I don’t like looking at empty function calls.
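For illustration, here’s roughly what that purge looks like – the Sound namespace and play function below are hypothetical stand-ins, not my actual code:

```cpp
#include <string>

// Instead of a stateless SoundHandler class instantiated as a singleton,
// plain functions live behind a namespace. Everything here is a
// hypothetical stand-in for illustration.
namespace Sound {
    // Returns the command it would hand to an audio backend, so the
    // sketch has something observable.
    std::string play(const std::string& clip) {
        return "play:" + clip;
    }
}

// Call sites now read Sound::play("jump") rather than calling through
// an "empty" handler object that holds no data of its own.
```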
I also decided that a single weekly update isn’t for me. I’d much rather update this blog as I do something because it feels more satisfying, plus it’s motivating for me to look back and see “Wow, I did all this”.
So first up, double buffering. You may be familiar with double buffering from graphics – after all, that’s where I first heard about it (and where I currently use it). OpenGL’s double buffering is safely hidden away inside the API, accomplished with a single buffer-swap call at the end of rendering. So I was interested in reading up on Robert Nystrom’s Double Buffering for physics.
Robert’s own page has an excellent explanation with pictures, but here’s my quick story:
Let’s say you have three balls in a line – red, yellow, and green. Red is on the left, green is on the right, and yellow is in the middle. If you pushed red and green towards yellow at the same speed, and assuming they both have the same mass, you’d expect that the yellow ball probably wouldn’t move much at all.
In a physics engine, I have to update the ball’s position. The way I do that is I first identify a collision has occurred, and then I resolve that collision. So let’s say we have
checkForCollision(red, yellow); checkForCollision(green, yellow);
At some point, with the red and green balls moving towards the yellow ball at the same speed, they’ll both collide with it at the same time, so both checks return true. Since checkForCollision(red, yellow) is the first one the computer sees, it resolves that one first.
With a simple physics collision, we apply the first object’s force to the second object. So we first resolve the (red, yellow) hit and, say, move the yellow ball five units right. OK. Now it’s time to do checkForCollision(green, yellow) – but the yellow ball has been moved five units right. It has a new force from the red ball, and it’s in a different position.
Perhaps the yellow ball doesn’t collide with the green ball now, or maybe it does and the collision is different, but whatever happens it doesn’t matter – we wanted to resolve the collisions simultaneously, and now we didn’t.
The solution is to store the collisions as they happen, but not apply their effects until every collision for the step has been detected.
This type of solution is called Double Buffering, and that’s what I want to implement next.
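Here’s a tiny sketch of the idea in C++ – the names and the one-dimensional ‘push’ are made up for illustration, not my engine’s real code:

```cpp
struct Vec2 { double x = 0, y = 0; };

struct Body {
    Vec2 pos[2];   // double-buffered position
};

int readIdx = 0;   // everyone reads this buffer during the step
int writeIdx = 1;  // every resolution writes only here

void beginStep(Body* bodies, int n) {
    for (int i = 0; i < n; ++i)
        bodies[i].pos[writeIdx] = bodies[i].pos[readIdx]; // carry state forward
}

// Toy "resolve a collision": push a body along x. It never reads
// anything another resolution may have just written, so resolving
// (red, yellow) before (green, yellow) gives the same result as
// the reverse order.
void applyPush(Body& b, double dx) {
    b.pos[writeIdx].x += dx;
}

void endStep() {
    int t = readIdx; readIdx = writeIdx; writeIdx = t; // swap buffers
}
```

With this in place, the three-ball story plays out correctly: red’s push and green’s push both land on yellow’s write buffer, cancel out, and yellow stays put.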
Presently I resolve collisions by gathering the ‘net acting forces’ on an actor, and after all collisions are resolved, I apply the result. This is correct, but quite messy, and has to frequently check where the actor was last step, where it wants to be, what the current movement vector is, and so on. It’s not intuitive and it’s hard to modify, which is part of the reason the coding is going so slowly. I like Robert’s suggestion here, as well as several of his other patterns, so that’s what I’m going to focus on for now.
Saturday evening: I’ve implemented double buffering for the rendering of the objects (remember, we have to calculate the position and rotation of the objects in between physics steps), and now I’ll work on the physics components. I previously didn’t calculate the rotation of objects using interpolation; I’ve now implemented this using quaternion slerp. I’ve also updated the actor class to now exclusively use quaternions.
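For anyone curious, the slerp itself is standard quaternion math, roughly like this – generic types for illustration, not my engine’s actual classes:

```cpp
#include <cmath>

struct Quat { double w, x, y, z; };

double dot(const Quat& a, const Quat& b) {
    return a.w*b.w + a.x*b.x + a.y*b.y + a.z*b.z;
}

// Spherical linear interpolation between two unit quaternions,
// t in [0, 1]. Used to smooth rotation between physics steps.
Quat slerp(Quat a, Quat b, double t) {
    double d = dot(a, b);
    if (d < 0) {                       // take the shorter arc
        b = { -b.w, -b.x, -b.y, -b.z };
        d = -d;
    }
    if (d > 0.9995) {                  // nearly parallel: lerp + normalise
        Quat r{ a.w + t*(b.w - a.w), a.x + t*(b.x - a.x),
                a.y + t*(b.y - a.y), a.z + t*(b.z - a.z) };
        double len = std::sqrt(dot(r, r));
        return { r.w/len, r.x/len, r.y/len, r.z/len };
    }
    double theta = std::acos(d);
    double sA = std::sin((1 - t) * theta) / std::sin(theta);
    double sB = std::sin(t * theta) / std::sin(theta);
    return { sA*a.w + sB*b.w, sA*a.x + sB*b.x,
             sA*a.y + sB*b.y, sA*a.z + sB*b.z };
}
```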
A little later at night, and the actor is now moving around (without collisions) correctly using the algorithm.
As well as this, I realised that ‘doPhysics’ should really be part of ‘doGameStep’, so I’ve tidied up the game loop a little bit. In the process, I’ve split the way that the system handles inputs.
One type of input (presently, camera moves) is resolved outside of doGameStep and occurs along with rendering. This happens very quickly, so right now the camera pans and zooms quite fast. However, the actor only resolves its input during a physics step (which is now part of the game step), so it moves at a slower rate. This is desirable, as I don’t want a character’s movement to be impacted by how fast their CPU is running.
At a later date, I’ll probably move all of the input handler code into the doGameStep code, except for high-level system messages such as ‘call up the game menu’ or ‘force quit’.
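The shape of that loop is roughly this – all the function names are placeholders for illustration, not my engine’s real API:

```cpp
// Fixed-step loop: camera input is handled every rendered frame,
// actor input only inside the fixed-rate game step, so gameplay
// speed doesn't depend on how fast the machine renders.
const double STEP = 1.0 / 60.0;   // fixed game/physics step

int cameraUpdates = 0;            // counters so the sketch is observable
int gameSteps = 0;

void handleCameraInput() { ++cameraUpdates; }  // pan/zoom: per frame
void handleActorInput()  { /* poll movement keys at the fixed rate */ }
void doGameStep(double)  { ++gameSteps; }      // physics lives in here now
void render(double)      { /* draw using the interpolation factor */ }

void runFrame(double frameTime, double& accumulator) {
    handleCameraInput();              // fast: tied to frame rate
    accumulator += frameTime;
    while (accumulator >= STEP) {
        handleActorInput();           // consistent on any CPU
        doGameStep(STEP);
        accumulator -= STEP;
    }
    render(accumulator / STEP);       // alpha for interpolated drawing
}
```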
The next step is to reactivate all of my collision surfaces.
After I’ve implemented this pattern completely, I’ll switch back to knock out a Unity tutorial/make a Unity game on my own. I want to alternate between Unity and C++ so that I can maintain steady progress in both.
Lastly, I was impressed with the way the Unity tutorials explained their scripting. I felt inspired to create my own C++ Game Programming tutorials. I suppose I had better make some games soon so that I can do this.
Monday afternoon: I’ve completed the double buffering and made a start on the next design pattern, Component.
Previously, Actors knew a great deal about their current state, and their physics code was contained along with the actor.
I’ve now moved the physics code out of Actor and into its own physics header.
In the Component pattern, Actors don’t know anything about their current state directly. They ‘have’ components that describe the state of the actor, and the game engine selectively picks which components it needs for any given game state. So I might have a physics component, a drawing component, a logic component, a sound component, and so on. If I store all the components from each actor by TYPE of component, then I can ultimately do away with the Actor class completely and simply traverse a component list, with each component carrying an ID saying which bunch of other components it belongs to.
This is good, because it lets me store sets of data that are likely to interact with each other in the same block of memory, allowing the CPU to find them in the cache – that’s Data Locality, the third pattern I’ll work on.
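A rough sketch of what storing components by type looks like – the component fields and names here are hypothetical:

```cpp
#include <cstdint>
#include <vector>

// Components grouped by TYPE in contiguous arrays; an id ties the
// pieces of one actor together. Names are illustrative only.
using ActorId = std::uint32_t;

struct PhysicsComponent { ActorId owner; double x, y, vx, vy; };
struct RenderComponent  { ActorId owner; int spriteId; };

struct World {
    std::vector<PhysicsComponent> physics; // packed together in memory
    std::vector<RenderComponent>  render;
};

// The engine walks one tightly packed list; no Actor class required.
void stepPhysics(World& w, double dt) {
    for (auto& p : w.physics) {
        p.x += p.vx * dt;
        p.y += p.vy * dt;
    }
}
```

Because each vector is contiguous, iterating one component type touches memory in order – which is exactly the cache behaviour the Data Locality pattern is after.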
For now, though:
I need to make some major changes to the engine to allow for proper physics.
In particular, scenes need to become actors. There are two types of actor now: kinematic and kinetic.
Kinetic actors are actors that follow the usual rules of physics, in particular gravity, and will respond to other actors colliding with them. For example – the player character.
Kinematic actors are actors that are effectively inert and aren’t impacted by physics. However, they will still exert their own physics on actors that collide with them. For example – walls of a house.
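In code, the distinction can be as small as a flag on the body – an illustrative sketch only, not my actual classes:

```cpp
// Two kinds of actor: kinematic ones are inert (walls of a house),
// kinetic ones respond to forces (the player character).
struct RigidBody {
    bool kinematic = false;  // true: exerts physics but ignores it
    double y  = 0.0;         // vertical position
    double vy = 0.0;         // vertical velocity
};

const double GRAVITY = -9.8;

void integrate(RigidBody& b, double dt) {
    if (b.kinematic) return;   // walls never accelerate or fall
    b.vy += GRAVITY * dt;      // kinetic bodies feel gravity...
    b.y  += b.vy * dt;         // ...and move accordingly
}
```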
Since I am using whether or not something is in a scene as the determination for my collision detection, removing the scene class effectively switches off my collisions.
The final pattern I want to work on is space partitioning, which means to divide up the space in the game in some way so that objects can efficiently check for hits only against their neighbours, rather than every actor in the scene.
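A uniform grid is the simplest form of this. Something like the following – a hypothetical sketch, and a real version would also check the eight neighbouring cells:

```cpp
#include <vector>

// A uniform grid over the play space: actors are binned by cell, and
// a collision query only considers actors sharing the cell, rather
// than every actor in the scene.
struct Grid {
    double cellSize;
    int cols, rows;
    std::vector<std::vector<int>> cells; // actor indices per cell

    Grid(double size, int c, int r)
        : cellSize(size), cols(c), rows(r), cells(c * r) {}

    int cellIndex(double x, double y) const {
        int cx = static_cast<int>(x / cellSize);
        int cy = static_cast<int>(y / cellSize);
        return cy * cols + cx;
    }

    void insert(int actor, double x, double y) {
        cells[cellIndex(x, y)].push_back(actor);
    }

    // Candidate colliders: only the actors in this position's cell.
    const std::vector<int>& candidates(double x, double y) const {
        return cells[cellIndex(x, y)];
    }
};
```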
This will be a lot of work. The end result will be an engine that is ready for physics (and more advanced collision detection) to be written in. The system architecture will be quite elegant and able to incorporate future changes far more easily.
I’ve decided to leave the game engine at this stage for the time being, and focus on the other projects that need to be completed – the submission to the inquiry, and prototyping with Unity. I’ll continue the engine development once the inquiry is done, and a prototype (or three) are done.
I’ve spent some time on the inquiry and also Unity. I also went back to the game engine because I’m addicted to programming.
It took most of the afternoon and evening to complete this step, which was to fully port the physics code out of the actor class. I now have a RigidBody component on the actors. Having gone back to effectively ‘redo’ this code, I’ve cleaned it up significantly.
The next step is to re-implement actor-versus-actor collisions. Previously I’d hacked this together roughly to get it ready for the game jam. This time around, I’ll take what I’ve learned from actor-versus-scene collisions, plus what I’ve learned reading physics books recently, to do the job properly.