I repurposed our elevators as trampolines.
When the first 7 Day FPS challenge was launched in 2012, I remember wishing I could participate but knowing full well that I didn’t have the skill to make anything of it. I’d never done any serious 3D programming and had no experience with the asset pipeline required for a 3D project. I decided I’d learn how to use a simple 3D engine in my free time over the following year and participate next time.
Well, it’s one year later, 7DFPS is back in business, and I still have no experience in any of the above areas. I had already resigned myself to missing out again, but I happened to stumble across this article on writing a “pseudo-3D” engine such as is used in id Software’s early games, including Doom. I decided to give it a shot. Here’s my progress from the first two days.
The first step is figuring out where the walls are. The above screenshot shows that not happening.
A bit of an improvement.
Wall segments are being drawn, but in completely the wrong places.
Segments are now positioned correctly. I changed the texture to a standard pink/black checkerboard for debugging, but it isn’t showing up right. In addition, the perspective is “fisheyed”, causing curved lines at the periphery of the field of view.
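The fisheye happens because each ray reports its straight-line distance to the wall, while the projection needs the perpendicular distance to the camera plane. The usual fix is to scale each hit distance by the cosine of the ray’s angular offset from the view direction. A minimal sketch with hypothetical names (this isn’t the article’s code):

```cpp
#include <cassert>
#include <cmath>

constexpr double kPi = 3.14159265358979323846;

// Convert a ray's straight-line (Euclidean) hit distance into the
// perpendicular distance to the camera plane. rayAngle and viewAngle
// are in radians; their difference is the ray's offset from center.
double perpendicularDistance(double euclideanDist,
                             double rayAngle, double viewAngle) {
    return euclideanDist * std::cos(rayAngle - viewAngle);
}

// The height of a wall slice on screen is inversely proportional to
// that corrected distance.
double wallSliceHeight(double screenHeight, double perpDist) {
    return screenHeight / perpDist;
}
```

With that correction applied to every screen column, the rays hitting a flat wall all report the same distance, so the wall renders as a straight line instead of bowing outward at the edges of the view: a ray straight ahead is unchanged, while a ray 60 degrees off-axis has its distance halved.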
The elusive “perfectly textured wall” in its native environment.
Some awesome modern art I created while trying to fix textures.
The final result as of this writing. At this point the engine is basically done and I can get to work on some gameplay.
I started work on a game called Gunbuilding about a week ago with the purpose of stress-testing #Punk, a C# port of Flashpunk that I’ve been developing. Nothing puts a framework through its paces like using it for a game jam, and if nothing else I ended up fixing a lot of bugs that would have inevitably bitten me later on. On the downside, though, my idea for the game changed drastically to fit inside my schedule, and I’m not thrilled with the way it came out.
If you want to play it, go here.
The first thing to go was the mechanic that gave the game its name. I created a system that assembled bullets by passing data through a set of components that could be swapped out at any time. In theory, this could have allowed for a crazy number of combinations, resulting in bullets that homed in on enemies and spawned others to ricochet around after impact. The system technically works as it is, but I didn’t have time to create more than one component for each category. I’d like to explore this type of system again in the future, though; it seemed like it had potential to be a lot of fun.
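For the curious, the shape of such a system is roughly the following. This is a from-scratch sketch with made-up names, not the actual Gunbuilding code:

```cpp
#include <functional>
#include <vector>

// Hypothetical sketch of a component-based bullet assembler: each
// component is a function that transforms the bullet's data, and the
// gun just runs the data through whatever components are slotted in.
struct BulletData {
    float speed = 100.0f;
    float damage = 1.0f;
    bool homing = false;
    int spawnOnImpact = 0;  // e.g. fragments that ricochet around
};

using BulletComponent = std::function<void(BulletData&)>;

// Build a bullet by piping default data through every component.
BulletData assembleBullet(const std::vector<BulletComponent>& components) {
    BulletData data;
    for (const auto& c : components) c(data);
    return data;
}
```

A loadout is just a vector of these components, so a homing module plus a ricochet module means slotting in two lambdas; swapping one out at any time changes the resulting bullet without touching the rest, which is where the combinatorial explosion of bullet types would come from.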
The next feature to be cut was the enemy AI. I’m pretty happy with the way my little guys hop around (the quadratic-curve movement system they use was one of the first holdups I encountered), but they did nothing to avoid each other and always ended up clumping together as they moved towards the player. As a solution (though at the time it was a joke) I made them explode when they touched each other, and then made that explosion chain to other nearby enemies. The chaining was super simple and easy to do thanks to #Punk’s extensive message broadcasting capabilities, and it turned out to be a lot more fun than the approach I had been using, so in one sense I’m glad I ran low on time to implement the rest of the game systems.
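Stripped of #Punk’s messaging layer, the chain reaction boils down to a flood fill over nearby enemies. A framework-free sketch (all names here are mine):

```cpp
#include <cmath>
#include <queue>
#include <vector>

// Framework-free sketch of the chained explosion: when one enemy blows
// up, every enemy within the blast radius explodes too, which can in
// turn catch others. Effectively a breadth-first flood fill.
struct Enemy {
    double x = 0, y = 0;
    bool exploded = false;
};

void chainExplosion(std::vector<Enemy>& enemies, std::size_t first,
                    double radius) {
    std::queue<std::size_t> pending;
    enemies[first].exploded = true;
    pending.push(first);
    while (!pending.empty()) {
        std::size_t i = pending.front();
        pending.pop();
        for (std::size_t j = 0; j < enemies.size(); ++j) {
            if (enemies[j].exploded) continue;
            double dx = enemies[j].x - enemies[i].x;
            double dy = enemies[j].y - enemies[i].y;
            if (std::hypot(dx, dy) <= radius) {
                enemies[j].exploded = true;  // caught in this blast
                pending.push(j);             // its blast may catch more
            }
        }
    }
}
```

Starting the chain at one enemy marks everything reachable through overlapping blast radii, while stragglers outside the radius are left alone, which is exactly why clumped-up enemies make such satisfying chains.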
The last thing to go was the graphics; I simply didn’t have time to put any effort into them. Everything in the game is made up of colored squares, with the exception of a grass tile I used for the background and a tiny image for the particle effects. I generally try to prototype with as few art assets as possible to avoid getting stuck perfecting them before any gameplay is in place. In one sense that worked out this time (I can’t imagine what else I would have had to cut had I spent all kinds of time on art early on), but I feel like shipping the game in such an incomplete state is a real shame.
Overall I’m disappointed with the way the game turned out, but this was never about the game. I’m pleased with the number of bugs I was able to fix in the framework, and that was the whole point anyway, so I’d call this experiment a success.
I’m using SFML for pretty much everything in my game engine. I started using it to help me learn C++ and never left. It’s pretty amazing: cross-platform, hardware-accelerated 2D graphics and one of the best APIs I’ve ever used. It’s a genuine pleasure to work with when it comes to expressiveness and simplicity.
I’ve recently been working on adding rich-text support to my engine, and while SFML does come with text rendering built in, it’s very simplistic. A text instance can only have one color and combination of styles in total, so if you want to italicize a single word in a sentence you’re out of luck; you’ll have to split the text up into multiple chunks and position them independently. I’ve written a class that handles this all automatically, along with a simple markup parser to manage formatting. You can get the source here, and discuss it over at the SFML forums in the topic I made for it.
I’ve done some parser work before (notably for Slang), so the markup interpreter was fairly straightforward. All formatting is done with single-character delimiters (*bold*, ~italic~, and _underlined_), and color tags can contain a hex value or a color name. I’m sure it could use some optimizing, but so far it’s quite fast and works just as well as the vanilla Text class.
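To give an idea of how the delimiter toggling works, here’s a simplified, SFML-free sketch of the splitting step. The real class also handles colors and positioning, and none of these names come from it:

```cpp
#include <string>
#include <vector>

// Simplified take on a delimiter-toggling parser: split a marked-up
// string into runs of text, each tagged with the styles active when
// it was read. *...* toggles bold, ~...~ italic, _..._ underline.
struct Chunk {
    std::string text;
    bool bold = false, italic = false, underline = false;
};

std::vector<Chunk> parseMarkup(const std::string& input) {
    std::vector<Chunk> chunks;
    bool bold = false, italic = false, underline = false;
    std::string run;
    // Emit the accumulated run with whatever styles are currently on.
    auto flush = [&]() {
        if (!run.empty()) {
            chunks.push_back(Chunk{run, bold, italic, underline});
            run.clear();
        }
    };
    for (char c : input) {
        switch (c) {
            case '*': flush(); bold = !bold; break;
            case '~': flush(); italic = !italic; break;
            case '_': flush(); underline = !underline; break;
            default:  run += c;
        }
    }
    flush();  // emit whatever is left after the last delimiter
    return chunks;
}
```

Feeding it `plain *bold* text` yields three runs with only the middle one flagged bold; a renderer can then draw and position each run as its own text object.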
If you haven’t used SFML before, I highly recommend checking it out! The fact that I was able to put all of this together in just one day is a testament to the power and ease of use that it provides.
My graph system is working nicely with multiple resolutions, but I realized while working on it that I’ve essentially locked myself into making every scene the same size. While not necessarily a problem, from a design standpoint it really isn’t conducive to the type of levels I want to implement. Of course, allowing scenes larger than the screen introduces several new problems, namely:
- The camera needs to keep all actors in view at all times
- Additional detail will be needed to surround the accessible area
That doesn’t seem too hard, right?
Easy stuff first: the camera needs to center itself between all the points I tell it to focus on. All I have to do is take the average of all their positions. Fortunately, this actually happened exactly as I thought it would.
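In code, that step is just a centroid. A minimal sketch, with a made-up Vec2 type standing in for whatever vector class the engine uses:

```cpp
#include <vector>

struct Vec2 {
    double x = 0, y = 0;
};

// The camera target is the average (centroid) of every focus point.
Vec2 cameraCenter(const std::vector<Vec2>& focusPoints) {
    if (focusPoints.empty()) return {};  // nothing to focus on
    Vec2 sum;
    for (const auto& p : focusPoints) {
        sum.x += p.x;
        sum.y += p.y;
    }
    double n = static_cast<double>(focusPoints.size());
    return {sum.x / n, sum.y / n};
}
```

Three actors at (0,0), (100,0) and (50,60) put the camera at (50,20), right in the middle of the group.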
Now on to zooming. Essentially, I check the distance between the two focus points furthest from each other. If it’s over a certain length, zoom out; if it’s under, zoom in. Stop zooming in once the view reaches a certain width, so it doesn’t zoom in to eternity.
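Sketched out, with made-up constants for the padding and the minimum width, the rule looks like this:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec2 {
    double x = 0, y = 0;
};

// Largest distance between any two focus points: the pair that is
// furthest apart decides how wide the view has to be.
double maxFocusSpread(const std::vector<Vec2>& pts) {
    double best = 0;
    for (std::size_t i = 0; i < pts.size(); ++i)
        for (std::size_t j = i + 1; j < pts.size(); ++j)
            best = std::max(best, std::hypot(pts[i].x - pts[j].x,
                                             pts[i].y - pts[j].y));
    return best;
}

// Hypothetical constants: padding keeps actors off the screen edge,
// and minWidth is the floor that stops the camera from zooming in
// to eternity.
double targetViewWidth(const std::vector<Vec2>& pts,
                       double padding = 100.0, double minWidth = 320.0) {
    return std::max(minWidth, maxFocusSpread(pts) + padding);
}
```

With the defaults above, two actors 500 units apart push the view out to 600 units wide, while two actors only 10 units apart bottom out at the 320-unit floor; zooming toward that target width each frame gives the in/out behavior described.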
Seems simple enough, right? That’s what happens when I write up a blog post after the fact. I really should start writing these while I’m in the thick of things…