This week was a good week. As well as making good progress on the code, I also had some time to read up on a couple of things and watch some short YouTube clips from Extra Credits. They suggest writing a weekly update on what you plan to achieve and matching it against what you did achieve. That way you can a) stay on track, b) measure your progress, and c) discover why engineering deadlines often blow out by 50% of their initial estimate.
Well, I already knew about part c: from my experience in my first real IT job, I already estimate double the time I think something will take. But parts a and b are interesting, so I’m going to give it a shot. To that end, I’m no longer going to post only ‘when I get to a certain point’ and will instead try to post weekly. I prefer to post images or movie clips, but this means I will now probably be posting mostly text, at least for now. So, ‘what I did’ this week: I finished animating the wheel. The wheel was part of the Mk2 Rocket, the first successful steam locomotive. You can see the tutorial for yourself here.
The ultimate goal of my Blender learning experience was to be able to create models, unwrap their textures and mesh, and import them into my software. The final tutorial was supposed to cover UV unwrapping (unwrapping the texture); however, to my surprise, that tutorial was missing, so I didn’t complete it. The rest of the train model was available to build, but it is excessively (lavishly!) decorated and looked like a significant time investment. I decided to skip it for now and get back to the project.
My main goal at the moment is to get the game engine up to a prototyping stage. I will put unwrapping and importing models aside for the moment and just make do with spheres and blocks. The basic game I am aiming for is one where you build your spaceship out of different predetermined blocks, then throw it at somebody else’s spaceship and see which one is best. So I want to test out this idea, see whether it is actually fun, and then either advance it or try something else.
So this week, the goal was to create a skybox: basically, a background of stars that appears to be many light years away but is actually just in front of the camera. I started by creating a new sphere object and replacing its texture with my star background.
Next, I created an inside-out earth. Essentially, I flipped the normals and swapped the top/bottom, left/right, and front/back faces of the cube-sphere.
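To make the ‘inside out’ step concrete, here is a minimal Python sketch (with hypothetical mesh data, not my actual Blender or engine code) of why reversing a triangle’s winding order flips its face normal:

```python
def face_normal(a, b, c):
    """Cross product of the triangle's two edge vectors (un-normalised)."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def flip_winding(triangles):
    """Swap two indices per triangle, so each face points the other way."""
    return [(i0, i2, i1) for (i0, i1, i2) in triangles]

# One triangle in the XY plane, viewed from +Z.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
tris = [(0, 1, 2)]

outward = face_normal(*[verts[i] for i in tris[0]])               # along +Z
inward = face_normal(*[verts[i] for i in flip_winding(tris)[0]])  # along -Z
print(outward, inward)  # [0, 0, 1] [0, 0, -1]
```

Doing this for every face of the cube-sphere is what turns it into a sphere you view from the inside.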
They seem to be different sizes and are overlapping. The middle earth is the original one and shows the correct lighting. The top-right earth is ‘inside out’: its left and right sides are closer to you than its center, which is why the light does not show up clearly on its right side the way it does on the middle earth.
The spheres are overlapping because I want to test out the z-depth and have a clear earth in front of the star background. To make the picture look better, I tried to move them side by side.
But…it didn’t work. Try as I might, I could not get the objects to move apart without amending their starting positions.
The rest of the week was spent debugging this issue. I still do not have a satisfactory result, but that’s okay, because I now know what to do.
There are actually several issues that need to be dealt with.
1 – Dealing with multiple models in the same scene. This is a complicated 3D math thing, so bear with me: each model has its own internal XYZ coordinate space, and the ‘camera’ has its own XYZ coordinate space. The model’s coordinate space doesn’t change, even if we pick the model up and move it around. However, the model ‘lives’ in the camera’s coordinate space, so when we move the model, its ‘XYZ coordinates from the camera’s perspective’ will change.
So what we need to send to your monitor is ‘how the model looks from the camera’s perspective’, and then we need to take that 3D mess of vectors and texture coordinates and translate it down into a 2D image that will fit on your computer monitor. We call this a projection (think of a projector that throws an image onto a wall: if something is in the way of the projection, like a light switch or a cupboard, the image still appears perfect when viewed from the position of the projector, but if you move to the side, shadows appear).
All up, there are three ‘coordinate spaces’ we need to care about: the model space, the camera space, and the projection. To cut a long story short, I spent time reading up on these spaces. It seems I have a problem with the model/camera space, and I need to work out how to deal with it correctly. This is the issue I mean to address first.
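The fix I’m after can be sketched in a few lines of Python (hypothetical names, plain 4x4 matrices, no graphics library): each model keeps its own model matrix, so moving one model never moves the other, and the model-space vertices themselves are never modified.

```python
def mat_identity():
    return [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]

def mat_translate(tx, ty, tz):
    m = mat_identity()
    m[0][3], m[1][3], m[2][3] = tx, ty, tz
    return m

def transform(m, v):
    """Apply a 4x4 matrix to the point (x, y, z, 1); return (x', y', z')."""
    p = (v[0], v[1], v[2], 1.0)
    return tuple(sum(m[r][c] * p[c] for c in range(4)) for r in range(3))

class Model:
    def __init__(self, vertices):
        self.vertices = vertices            # model space: never changes
        self.model_matrix = mat_identity()  # model space -> camera space

    def move(self, tx, ty, tz):
        self.model_matrix = mat_translate(tx, ty, tz)

    def world_vertices(self):
        return [transform(self.model_matrix, v) for v in self.vertices]

earth = Model([(0.0, 0.0, 0.0)])
skybox = Model([(0.0, 0.0, 0.0)])
earth.move(2.0, 0.0, 0.0)       # only the earth's matrix changes

print(earth.world_vertices())   # [(2.0, 0.0, 0.0)]
print(skybox.world_vertices())  # [(0.0, 0.0, 0.0)]
```

My current bug looks like the opposite of this: a single shared transform, which is why the spheres refuse to move apart independently.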
2 – Dealing with multiple shaders. I discovered that whatever texture I load and use last gets painted on everything. That’s because I have only one shader and it’s being used for everything, and the shader has only one paintbrush colour. I therefore need to either modify the shader between drawing each model (impractical, as this requires a run-time compilation per model, per frame) or provide a unique shader for each model (more practical, as it simply requires switching the active shader program).
The solution is to provide unique shaders for each model.
There are two obvious ways to do this.
A – Shaders are created individually along with models, and each shader has a unique ID. Models are given a shader ID when they are created that matches an existing Shader. The model provides its ID to the shader list to retrieve the correct shader. The program then draws the model with the correct shader.
B – Each model contains its own shader information. The shader program is created when the model is created, and when it’s time to draw, the model provides its own shader information. The model then draws itself.
Both approaches are viable and at this stage I don’t know which is hands down the best. It’s a question of object-oriented programming versus data encapsulation.
The first method uses an object-oriented programming ideology: because shaders are not bound to a model, multiple models can use the same shader, and I can easily change which shader a model uses.
The second method uses data encapsulation: because shaders are bound to models, and the models know how to draw themselves, it is easier to deal with. However, if many objects use the same shader, it’s inefficient, since each model creates its own copy.
The answer to this question is not particularly important for a prototype so I may just code whichever looks easier to implement. In a game like Space Invaders, for example, there are many alien ships that all appear identical except for colour. I’d want to use the first method for Space Invaders. If all the little alien ships were unique, though, then I’d want to use the second method.
Edit: Both of these methods can be improved.
Method 1: Instead of using an ID and having the model search for its shader, the ID is actually the address/reference of the shader program inside the container. This eliminates the search, but requires care when creating the container to ensure keys are not lost or overwritten.
Method 2: I can use object inheritance when models want to share the same shader, thus eliminating the main drawback.
That makes method 2 the clear winner, and it’s the one I’ll be pursuing. I will treat a shader program as just another variable of a model, like its orientation, location, and so on.
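Here’s a rough Python sketch of what I mean (hypothetical names; the `handle` stands in for a compiled shader program). The model owns a reference to its shader, and handing two models the same reference avoids a second compilation:

```python
class Shader:
    _next_handle = 0

    def __init__(self, source):
        self.source = source
        Shader._next_handle += 1
        self.handle = Shader._next_handle  # stand-in for a compiled program

class Model:
    def __init__(self, name, shader):
        self.name = name
        self.shader = shader  # the shader is just another property of the model

    def draw(self):
        # In the real engine this would bind self.shader, then issue the
        # draw call; here we just report which program would be used.
        return (self.name, self.shader.handle)

star_shader = Shader("star shader source")
earth = Model("earth", Shader("earth shader source"))
skybox = Model("skybox", star_shader)
alien = Model("alien", star_shader)  # shared reference: no second compile

draws = [m.draw() for m in (earth, skybox, alien)]
```

Each model draws itself with its own shader, but the skybox and the alien share one program, which is the ‘inheritance’ escape hatch for the duplication drawback.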
Goals for next week:
1 – Solve model/camera space issue (can move skybox-sphere and earth-sphere independently of each other at run time).
2 – Solve multiple shader issue (can draw skybox-sphere and earth-sphere with their own textures).
3 – Implement z-depth filter on skybox-sphere (sky-box sphere always appears behind earth-sphere even if skybox-sphere is closer).
4 – Implement skybox (relocate skybox-sphere to camera position, skybox stays with camera but does not rotate).
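The trick behind goals 3 and 4 can be shown with a tiny software sketch (hypothetical, not GL code): draw the skybox first with depth writes disabled, so anything drawn afterwards always appears in front of it, no matter how close the skybox sphere actually is.

```python
def render(fragments):
    """fragments: (name, depth, writes_depth) drawn in order over one pixel."""
    colour, depth_buffer = None, float("inf")
    for name, depth, writes_depth in fragments:
        if depth <= depth_buffer:  # passes the depth test
            colour = name
            if writes_depth:
                depth_buffer = depth
    return colour

# The skybox sphere is actually closer (depth 1.0) than the earth (5.0),
# but because it never writes depth, the earth still wins the pixel.
print(render([("skybox", 1.0, False), ("earth", 5.0, True)]))  # earth
```

Where nothing else covers a pixel, the skybox remains visible, which is exactly the behaviour I want from the star background.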