
7DFPS 2013: Writing a raycaster

When the first 7 Day FPS challenge was launched in 2012, I remember wishing I could participate but knowing full well that I didn’t have the skill to make anything of it. I’d never done any serious 3D programming and had no experience with the asset pipeline required for a 3D project. I decided I’d learn how to use a simple 3D engine in my free time over the following year and participate next time.

Well, it’s one year later, 7DFPS is back in business, and I still have no experience in any of the above areas. I had already resigned myself to missing out again, but I happened to stumble across this article on writing a “pseudo-3D” raycasting engine of the kind id Software used in its early games, most famously Wolfenstein 3D. I decided to give it a shot. Here’s my progress for the first two days.

The first step is figuring out where the walls are. The above screenshot shows that not happening.
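
The post doesn’t include source for this step, so here’s a minimal sketch of the idea with a toy map and names of my own invention: march each ray outward in small increments until it lands inside a wall cell. (A real engine would typically use DDA to jump exactly from cell boundary to cell boundary.)

```cpp
#include <cmath>

// Toy map: 1 = wall, 0 = empty. Everything here is illustrative.
const int MAP_W = 8, MAP_H = 8;
const int worldMap[MAP_H][MAP_W] = {
    {1,1,1,1,1,1,1,1},
    {1,0,0,0,0,0,0,1},
    {1,0,0,0,0,1,0,1},
    {1,0,0,0,0,1,0,1},
    {1,0,1,1,0,1,0,1},
    {1,0,0,0,0,0,0,1},
    {1,0,0,0,0,0,0,1},
    {1,1,1,1,1,1,1,1},
};

// March a ray from (px, py) along `angle` in small steps until it
// enters a wall cell; returns the distance travelled, or -1 if the
// ray leaves the map.
float castRay(float px, float py, float angle)
{
    const float step = 0.01f;
    float x = px, y = py;
    while (x >= 0 && y >= 0 && x < MAP_W && y < MAP_H) {
        if (worldMap[(int)y][(int)x] != 0)
            return std::hypot(x - px, y - py);
        x += std::cos(angle) * step;
        y += std::sin(angle) * step;
    }
    return -1.f;
}
```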

A bit of an improvement.

Wall segments are being drawn, but in completely the wrong places.

Segments are now positioned correctly. I changed the texture to a standard pink/black checkerboard for debugging, but it isn’t showing up right. In addition, the perspective is “fisheyed”, causing curved lines at the periphery of the field of view.

Better.
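
For the record, the standard cure for the fisheye effect is to use the perpendicular distance to the wall rather than the raw ray length, projecting each ray onto the view direction. A sketch, with invented names:

```cpp
#include <cmath>

// rawDist is the euclidean length of the ray; rayAngle - playerAngle
// is the ray's offset from the center of the view. Rays at the edge
// of the FOV travel farther than the center ray, which is what bends
// straight walls. Projecting onto the view direction removes the curve.
float perpendicularDistance(float rawDist, float rayAngle, float playerAngle)
{
    return rawDist * std::cos(rayAngle - playerAngle);
}
```

The wall slice height is then the screen height divided by that corrected distance.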

The elusive “perfectly textured wall” in its native environment.
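
For anyone curious how the texturing works, the usual trick is to take the fractional part of the hit coordinate along the wall face to pick a texture column. A sketch (my illustration, not this project’s code):

```cpp
#include <cmath>

// For a hit at world position (hitX, hitY), the fractional part of the
// coordinate running *along* the wall face selects the texture column:
// hitX for horizontal faces, hitY for vertical ones.
int textureColumn(float hitCoord, int texWidth)
{
    float frac = hitCoord - std::floor(hitCoord); // 0..1 within the cell
    return (int)(frac * texWidth);
}
```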

Some awesome modern art I created while trying to fix textures.

The final result as of this writing. At this point the engine is basically done and I can get to work on some gameplay.

Game design by necessity

I started work on a game called Gunbuilding about a week ago with the purpose of stress-testing #Punk, a C# port of Flashpunk that I’ve been developing. Nothing puts a framework through its paces like using it for a game jam, and if nothing else I ended up fixing a lot of bugs that would have inevitably bitten me later on. On the downside, though, my idea for the game changed drastically to fit inside my schedule, and I’m not thrilled with the way it came out.

If you want to play it, go here.

The first thing to go was the mechanic that gave the game its name. I created a system that assembled bullets by passing data through a set of components that could be swapped out at any time. In theory, this could have allowed for a crazy number of combinations, resulting in bullets that homed in on enemies and spawned others to ricochet around after impact. The system technically works as it is, but I didn’t have time to create more than one component for each category. I’d like to explore this type of system again in the future, though; it seemed like it had potential to be a lot of fun.
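
To make the idea concrete, here’s a rough sketch of that kind of component pipeline. This is my own C++ illustration rather than the game’s actual C# code, and every name in it is invented:

```cpp
#include <memory>
#include <vector>

// A bullet description gets passed through every installed component,
// each of which mutates it. Swap components to change the result.
struct BulletSpec {
    float speed = 300.f;
    float damage = 1.f;
    bool homing = false;
    int spawnOnImpact = 0; // child bullets to ricochet after impact
};

struct BulletComponent {
    virtual ~BulletComponent() = default;
    virtual void apply(BulletSpec& spec) const = 0;
};

struct HomingComponent : BulletComponent {
    void apply(BulletSpec& spec) const override { spec.homing = true; }
};

struct RicochetComponent : BulletComponent {
    void apply(BulletSpec& spec) const override { spec.spawnOnImpact += 2; }
};

// Run the spec through whatever components are currently installed.
BulletSpec assemble(const std::vector<std::unique_ptr<BulletComponent>>& parts)
{
    BulletSpec spec;
    for (const auto& part : parts)
        part->apply(spec);
    return spec;
}
```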

The next feature to be cut was the enemy AI. I’m pretty happy with the way my little guys hop around (the quadratic curve movement system they use was one of the first holdups I encountered), but they don’t do anything to avoid each other and always ended up clumping together as they moved towards the player. As a solution (though at the time it was a joke), I made them explode when they touched each other, and then made that explosion chain to other nearby enemies. The chaining was super simple to implement thanks to #Punk’s extensive message broadcasting capabilities, and it turned out to be a lot more fun than the approach I had been using, so in one sense I’m glad I ran low on time to implement the rest of the game systems.
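
The chain itself boils down to something like this generic C++ sketch (not #Punk’s actual messaging API, and the names are mine):

```cpp
#include <cmath>
#include <vector>

struct Enemy { float x, y; bool alive = true; };

// When an enemy explodes, every living enemy within `radius` explodes
// too; the recursion is what carries the chain across the whole clump.
void explode(Enemy& source, std::vector<Enemy>& enemies, float radius)
{
    source.alive = false; // mark dead first so the recursion terminates
    for (Enemy& other : enemies) {
        if (!other.alive)
            continue;
        if (std::hypot(other.x - source.x, other.y - source.y) <= radius)
            explode(other, enemies, radius);
    }
}
```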

The last thing I didn’t have time for was graphics. Everything in the game is made up of colored squares, with the exception of a grass tile I used for the background and a tiny image for the particle effects. I generally try to prototype with as few art assets as possible to avoid getting stuck perfecting them before any gameplay is in place. In one sense that worked out this time, since I can’t imagine what else I would have had to cut had I spent all kinds of time on art early on, but I feel like shipping the game in such an incomplete state is a real shame.

Overall I’m disappointed with the way the game turned out, but this was never about the game. I’m pleased with the number of bugs I was able to fix in the framework, and that was the whole point anyway, so I’d call this experiment a success.

RichText in SFML

I’m using SFML for pretty much everything in my game engine. I started using it to help me learn C++ and never left. It’s pretty amazing: cross-platform, hardware-accelerated 2D rendering, and one of the best APIs I’ve ever used. It’s a genuine pleasure to work with when it comes to expressiveness and simplicity.

I’ve recently been working on adding rich-text support to my engine, and while SFML does come with text rendering built in, it’s very simplistic. A text instance can only have a single color and style combination, so if you want to italicize a single word in a sentence you’re out of luck; you’ll have to split the text up into multiple chunks and position them independently. I’ve written a class that handles all of this automatically, along with a simple markup parser to manage formatting. You can get the source here, and discuss it over at the SFML forums in the topic I made for it.
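
For context, this is the manual approach the class automates, in plain SFML 2 calls: one sf::Text per styled run, each positioned where the previous run ends. A minimal sketch, with the font assumed to be already loaded:

```cpp
#include <SFML/Graphics.hpp>

sf::Font font; // assume loaded via font.loadFromFile(...)

void layoutExample()
{
    sf::Text before("An ", font, 24);
    sf::Text italic("italic", font, 24);
    italic.setStyle(sf::Text::Italic);
    sf::Text after(" word.", font, 24);

    // Chain each chunk off the end of the previous one.
    before.setPosition(10.f, 10.f);
    italic.setPosition(
        before.getPosition().x + before.getLocalBounds().width, 10.f);
    after.setPosition(
        italic.getPosition().x + italic.getLocalBounds().width, 10.f);
    // ...then draw all three every frame.
}
```

One fiddly detail a class like this has to handle: getLocalBounds() doesn’t count trailing whitespace, so naive chaining can swallow the space between chunks.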

Here’s a screenshot of the class in action, with the text source above.

I’ve done some parser work before (notably for Slang), so the markup interpreter was fairly straightforward. All formatting is done with single-character delimiters (*bold*, ~italic~, and _underlined_), and the color tags can contain a hex value or color name. I’m sure it could use some optimizing, but so far it’s quite fast and renders just as well as the vanilla Text class.
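
A toy version of the delimiter handling, to show the shape of it (my illustration, not the actual source; color tags omitted): toggle a style flag whenever a delimiter appears, and start a new chunk at each toggle.

```cpp
#include <string>
#include <vector>

// Each chunk is a run of text sharing one style state.
struct Chunk {
    std::string text;
    bool bold = false, italic = false, underline = false;
};

std::vector<Chunk> parseMarkup(const std::string& src)
{
    std::vector<Chunk> chunks;
    Chunk current;
    for (char c : src) {
        if (c == '*' || c == '~' || c == '_') {
            if (!current.text.empty()) {   // close the current run
                chunks.push_back(current);
                current.text.clear();
            }
            if (c == '*') current.bold = !current.bold;
            if (c == '~') current.italic = !current.italic;
            if (c == '_') current.underline = !current.underline;
        } else {
            current.text += c;
        }
    }
    if (!current.text.empty())
        chunks.push_back(current);
    return chunks;
}
```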

If you haven’t used SFML before, I highly recommend checking it out! The fact that I was able to put all of this together in just one day is a testament to the power and ease of use that it provides.

Building an intelligent camera

My graph system is working nicely with multiple resolutions, but I realized while working on it that I’ve essentially locked myself into having every scene be the same size. While not necessarily a problem, from a design standpoint it really isn’t conducive to the type of levels I want to implement. Of course, allowing scenes larger than the screen introduces several new problems, namely:

  • The camera needs to keep all actors in view at all times
  • Additional detail will be needed to surround the accessible area

That doesn’t seem too hard, right?

Hah!

Easy stuff first: the camera needs to center itself among all the points I tell it to focus on. All I have to do is take the average of their positions. Fortunately, this worked exactly the way I thought it would.
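
In code that’s a one-liner over SFML vectors; a sketch:

```cpp
#include <SFML/System/Vector2.hpp>
#include <vector>

// Camera target: the average of every focus point.
sf::Vector2f focusCenter(const std::vector<sf::Vector2f>& points)
{
    sf::Vector2f sum(0.f, 0.f);
    for (const sf::Vector2f& p : points)
        sum += p;
    return sum / (float)points.size(); // assumes at least one point
}
```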

Now on to zooming. Essentially, I check the distance between the two focus points furthest from each other. If it’s over a certain length, zoom out. If it’s under, zoom in. Stop zooming in once the view reaches a certain width, so it doesn’t zoom in to eternity.
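
Here’s a sketch of that rule. The thresholds and step factors are invented for illustration:

```cpp
#include <SFML/Graphics.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

// Find the largest distance between any two focus points, then nudge
// the view: out if the spread is too big, in if it's small, clamped
// so the view never gets narrower than minWidth.
void updateZoom(sf::View& view, const std::vector<sf::Vector2f>& points,
                float maxSpread, float minWidth)
{
    float spread = 0.f;
    for (std::size_t i = 0; i < points.size(); ++i)
        for (std::size_t j = i + 1; j < points.size(); ++j)
            spread = std::max(spread, std::hypot(points[j].x - points[i].x,
                                                 points[j].y - points[i].y));

    if (spread > maxSpread)
        view.zoom(1.05f); // grow the view: zoom out
    else if (view.getSize().x > minWidth)
        view.zoom(0.95f); // shrink the view: zoom in, but not forever
}
```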

Seems simple enough, right? That’s what happens when I write up a blog post after the fact. I really should start writing these while I’m in the thick of things…

Resolution independence

Planning posts are all well and good, but what happens when you end up going in an entirely different direction?

I ended up creating a new class, MN::Window, that inherits from sf::RenderWindow but adds some useful members to determine resolution and pixel scale ratio.
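
Roughly, the shape of it; the member names here are my guesses at the interface, not the actual code:

```cpp
#include <SFML/Graphics.hpp>

namespace MN {
// A RenderWindow that knows the resolution the art was authored for
// and exposes the ratio to the actual resolution.
class Window : public sf::RenderWindow {
public:
    Window(sf::VideoMode mode, const sf::String& title,
           float designWidth, float designHeight)
        : sf::RenderWindow(mode, title),
          m_scale(mode.width / designWidth, mode.height / designHeight) {}

    const sf::Vector2f& pixelScale() const { return m_scale; }

private:
    sf::Vector2f m_scale; // actual resolution / design resolution
};
}
```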

Now, when I’m constructing the pathing graph, each cell is WindowWidth/20 by WindowHeight/15. I still have a 20×15 grid and an array of Vector2fs, but they’re static to save on memory. I can also access grid locations from anywhere in the program, so all objects in the world will use graph indices for positional values from now on.
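
In other words, a cell’s position can be derived from the current window instead of stored. Something like this (names invented):

```cpp
#include <SFML/System/Vector2.hpp>

const int GRID_COLS = 20, GRID_ROWS = 15;

// Cell positions come from the window size rather than fixed pixels,
// so any resolution produces a correct 20x15 grid.
sf::Vector2f cellPosition(int col, int row, float winW, float winH)
{
    return sf::Vector2f(col * (winW / GRID_COLS), row * (winH / GRID_ROWS));
}
```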

Previously I had been developing Meander to run in a window at 800×600. Besides the issue of hard-coded window dimensions, this created the problem that resizing the window larger (to, say, 1920×1080) stretched the images out from a 4:3 ratio to 16:9. And no, they’re not the same. To fix this, I started targeting a 16:9 resolution by default. But what happens when I want the display to be smaller? I added a few members to MN::Window that help me solve this problem.

To display correctly on a lower resolution, all the sprites on the screen need to be scaled by a factor of WindowDimensions/OriginalDimensions. For example, a transition from 1920×1080 to 1366×768 works out to this:

scaleX = 1366 / 1920 ≈ 0.71
scaleY = 768 / 1080 ≈ 0.71

I’m no good at math. I may seem like it sometimes, but it’s all an act, I assure you. I don’t know if the scale ratio for any resolution of 16:9 will result in x === y. But just to be sure I’m going to leave both equations in.

This tells us that each sprite needs to display on the screen at roughly 0.71 of its actual height and width. My sprite manager can take care of that each time the resolution changes.
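
In SFML terms that’s one setScale call per sprite; a sketch:

```cpp
#include <SFML/Graphics.hpp>

// On a resolution change, scale every sprite by actual / original.
void applyResolutionScale(sf::Sprite& sprite,
                          sf::Vector2f actual, sf::Vector2f original)
{
    sprite.setScale(actual.x / original.x, actual.y / original.y);
}
```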

Whew! Got that checked off the list. I made myself promise I wouldn’t play any video games until I’d finished. Time for a rest. :D

Fixing the graph – planning stage

Goal number the last on my roadmap reads: “Decoupling the navigation mesh from specific window sizes”. What on earth does that mean? I’m glad you asked.

Thus far I’ve been programming and debugging my program using a fixed-size window at 800 by 600 pixels. Unfortunately, when I wrote my Graph implementation I was all about magic numbers and not so much about planning for the future. Consider the following code snippet:
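
The snippet amounted to something like this reconstruction (treat it as illustrative, not a quote): every node position baked in with magic numbers, where 40 is both 800/20 and 600/15.

```cpp
#include <SFML/System/Vector2.hpp>
#include <vector>

std::vector<sf::Vector2f> m_nodes; // the Graph's Node list

void buildNodes()
{
    // 40x40 pixel cells, hard-coded: only correct for a 20x15 grid
    // inside an 800x600 window.
    for (int row = 0; row < 15; ++row)
        for (int col = 0; col < 20; ++col)
            m_nodes.push_back(sf::Vector2f(col * 40.f, row * 40.f));
}
```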

At some point I got it into my head to store exact pixel locations in the graph. Now, there’s nothing necessarily wrong with this… as long as

  • I never need to change the window size or resolution
  • The view never changes

Obviously this is unacceptable, and really should have been fixed by now. I need a plan!

Currently the Graph class is made up of:

  • The Walkable list: a set of integers acting as boolean (0/1) variables (see here for why a vector<bool> is A Bad Thing)
  • The Node list: a set of Vector2s for positioning objects
  • The State list: a set of rectangles used to map mouse positions to nodes

Looking at this, I realized that I really don’t need Nodes or States to be in there. After all, my pathfinder’s solver function only takes two numbers as arguments: the grid indices for the start and end of the path. At each step of the path, the pathfinder accesses the Node list with that step’s index and returns the position. I can easily strip this out and replace it with a simple function that converts a graph index to a Vector2. I’ll still need a mesh to determine where on the screen a mouse click lands when it kicks off a pathfinding call, but I can most likely construct that on the fly.
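
The replacement can be as simple as this sketch (names are mine):

```cpp
#include <SFML/System/Vector2.hpp>

// Convert a linear graph index into a world position on demand,
// rather than storing a Node list at all. 20 columns, 15 rows.
sf::Vector2f indexToPosition(int index, float winW, float winH)
{
    const int COLS = 20;
    float cellW = winW / COLS;
    float cellH = winH / 15.f;
    return sf::Vector2f((index % COLS) * cellW, (index / COLS) * cellH);
}
```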

No more delays. Time to get this done.