Copyright © 2003, 2004 Gene Michael Stover. All rights reserved. Permission to copy, store, & view this document unmodified & in its entirety is granted.
I've been thinking about writing graphics-intensive games & simulations in Lisp. Here are thoughts about it. Much of what I discuss is language-independent, not specific to Lisp. Most of what I discuss is not yet implemented (by me, anyway). It's just thoughts & speculations.
I'm talking about games & simulations that use an event loop & that may have significant, real-time graphical output. So I'm not talking about a chess game; that's turn-based, not event-based. I'm talking about shooters, platformers, ballistic simulations, artificial life, molecular simulations, & things like that. In the simulation world, I believe these are called ``discrete event simulations'' as opposed to ``continuous simulations''.
Both simulations & games have an event loop & an event queue. The worlds within them contain a time continuum.
So what's the difference between a simulation & a game? In a simulation, the inner world's time continuum should progress as fast as possible. In a game, it should be connected to the real world's time continuum.
If a simulation is showing you an animation, you want it to produce that animation as quickly as possible. If you want to watch the animation in real time, the simulation should save it to a movie file that you can play at your convenience later. Likewise, if a simulation can't generate its data as quickly as real time, you want it to save the results to a movie file so you can watch them in real time later.
A game is interactive, & its time continuum is tied to reality's time continuum so the user can experience it now, in interactive time.
I guess you could say that a game has tighter performance constraints because it is interactive. This does not mean that a game must be more efficient. A simulation should be efficient so it can do as much work as possible in the time you can spend waiting for it. A game's performance constraints are tighter because they have well-defined lower & upper bounds. The lower bound on a simulation's performance is fuzzier because, depending on the domain, it might be okay to wait longer than real time for the results (which you'd then replay as a movie in real time). A simulation doesn't have an upper bound on performance. So a simulation programmer does not necessarily have easier performance constraints than a game programmer; a game is merely bounded on both ends.
I'll probably interchange the terms ``game'' & ``simulation'' from now on.
I'd like to write my games in Lisp.
So why a game in Lisp? The current vogue architecture for games is to implement primitives in a low-level language, then implement the game logic & flow in a script language.
As an experienced Lisp programmer can see immediately, Lisp is perfect for that script language. It's lightweight, it's powerful beyond the imagination of imperative programmers, it's an excellent data-description language, & it's already standardized, implemented, & debugged. Lisp is so perfect for the task of a game's script language that any game programmer who doesn't use Lisp for his script language is missing out.
I could create a Lisp library that binds to OpenGL, but OpenGL has over 200 functions, different Lisps have different foreign function interface libraries, & different OpenGL implementations might be subtly different enough to vastly complicate the task of creating the Lisp OpenGL bindings library. I don't want to spend the rest of my life writing & maintaining that library.
Also, though Lisp is vastly more efficient than most people know, it isn't as fast as it could be, & achieving 30 graphics frames per second requires much of the computing power that's available. It's not a place where you want dynamic typing & other nifty features to get in the way of performance.
So what's an alternative?
An alternative is a C library whose interface is designed around the semantics of the application. The C library is not concerned with polygons & display lists. It is concerned with the location of the monsters & the position of the camera & the character. It allows the Lisp driver program to tell C when a missile was fired, when it hit something, & what type of explosion occurred.
This solution solves all the problems.
It's probably fairly small (far fewer than OpenGL's 200 functions).
Because it deals with high-level semantics within the program, one function call can accomplish a lot of work. The Lisp program could read user input in one C function call, report the avatar's new position in another, report a monster's new position in another, report the camera's new location in another, & call a final function to render the whole thing.
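Here is a minimal sketch of what one frame of that conversation might look like. Every function name here is invented for illustration; in a real program, each of these DEFUNs would instead be a foreign-function binding into the C library.

;; Stand-ins for the C entry points.
(defun read-user-input () '())                  ; poll keyboard & mouse
(defun set-avatar-position (x y z) (list x y z))
(defun set-monster-position (id x y z) (list id x y z))
(defun set-camera-position (x y z) (list x y z))
(defun render-world () t)                       ; draw the whole frame

(defun one-frame ()
  (let ((input (read-user-input)))
    (declare (ignorable input))
    ;; The Lisp game logic decides the new state of the world...
    (set-avatar-position 1.0 0.0 2.0)
    (set-monster-position 42 5.0 0.0 -3.0)
    (set-camera-position 1.0 1.5 4.0)
    ;; ...& a single call hands the whole frame to C.
    (render-world)))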
If the Lisp & C parts of the game could share the data structure that represents the world, then maybe Lisp could keep that up-to-date & call a single C function to render the world every 1/30 of a second. It'd still need to call C functions to read user input. Oh, & there will always be menus & such.
So the C library's interface for Lisp is simple enough for me to implement as part of a game programming project, rather than as a whole new, life-encompassing programming passion of creating a Lisp interface to OpenGL.
The C library improves efficiency, too, because it moves all of the graphical work into C. I prefer Lisp because it's better for manipulating algorithms, but graphics algorithms aren't fun to manipulate. They are complex, tedious, trigonometric algorithms. They are well-understood & well-documented. It's perfectly feasible to leave them in C.
Finally, the C library improves portability across OpenGL libraries. Well, I can't be sure that it will, but I'm betting that it will. There may be subtle differences between OpenGL libraries, but I'd bet that most OpenGL implementors put reasonable effort into testing their libraries with C. I get the best assurance of portability from one OpenGL library to another if all my code which deals with OpenGL is in plain, vanilla C. I'll probably need to relink the program if I switch OpenGL libraries, but I feel safer with this technique than with an OpenGL interface for Lisp.
If all the graphics code is in C (in terms of OpenGL), & the logic, flow, & AI of the game is in Lisp, where is the event loop?
Background: I have an event system framework which I use when I write simulations in Lisp. I plan to use that in my more graphical games.
If I used GLUT's glutMainLoop, I could hook into Lisp with GLUT's idle function. So my program would work like this (pseudocode):
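Every name below is a hypothetical stub standing in for a GLUT call or for a call into the C library:

(defun init-graphics () t)              ; glutInit, window setup, &c.
(defun register-idle-callback (fn) fn)  ; glutIdleFunc
(defun register-input-callbacks () t)   ; glutKeyboardFunc & friends
(defun glut-main-loop () t)             ; glutMainLoop: never returns

(defun handle-ready-events ()
  ;; Pop & handle every game event whose time has arrived.
  t)

(defun run-game ()
  (init-graphics)
  (register-input-callbacks)
  (register-idle-callback #'handle-ready-events)
  (glut-main-loop))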
The idle function used by the pseudocode above would handle whatever events in the Lisp event queue are ready. (That event queue holds game events, as opposed to system events such as keyboard & mouse.)
Something I dislike about this is that control never returns to Lisp. For all I know, Lisp needs to have total control periodically to run the garbage collector. Besides the idle call-back, nearly all of the C functions which GLUT called for keyboard & mouse events would need to call Lisp functions; it could result in a spaghetti mess of event-driven C code calling event-driven Lisp code. Ick. Also, I hate it when a program cannot exit by returning from all functions.
A more well-behaved solution would allow Lisp to keep control & run the event loop itself. This pretty much precludes GLUT, but I understand that serious applications probably need their own event loops, anyway.
A program using the custom event loop would work like this:
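Here is a minimal, runnable sketch, assuming the queue is a list of (time . thunk) pairs sorted by time, & that CHECK-SYSTEM-EVENTS stands for a C call that polls keyboard & mouse for up to TIMEOUT seconds:

(defun now ()
  "The real-world clock, in seconds."
  (/ (get-internal-real-time) internal-time-units-per-second))

(defun check-system-events (timeout)
  ;; Stub.  The real version is a C call that waits up to TIMEOUT
  ;; seconds for keyboard & mouse input, posting game events for it.
  (sleep timeout))

(defun run-event-loop (queue &key simulation-p)
  (loop while queue
        do (let ((timeout (if simulation-p
                              0           ; a simulation never waits
                              (max 0 (- (car (first queue)) (now))))))
             (check-system-events timeout)
             ;; A game handles the events whose times have arrived; a
             ;; simulation handles the next event immediately, so its
             ;; inner clock runs as fast as possible.
             (loop while (and queue
                              (or simulation-p
                                  (<= (car (first queue)) (now))))
                   do (funcall (cdr (pop queue)))))))

;; Example: one event, one second from now.
;; (run-event-loop (list (cons (+ (now) 1) (lambda () (print :frame)))))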
In this newer algorithm, the difference between a game & a simulation is that a simulation always sets the timeout to zero before checking for system events.
One of the events will surely be a frame event. For that, Lisp calls the C function which draws the world. Other events probably change the state of the world.
Since the game program will run on different computers with different amounts of processing power, it'd be nice if the program could adjust its own frame rate. Some computers might be able to handle over 30 frames per second, but it might be asking a lot for others to manage a third of that. It'd be nice if the program worked as well as possible on both machines by adjusting its own frame rate.
So what's a light-weight way for the program to adjust its own frame rate? We don't want the algorithm to be expensive because that would reduce the maximum frame rate itself.
Thinking about the event-loop pseudocode above, look at the steps where we determine the time until the next event & then call a function to check for system events. If it frequently happens that this wait is large, then we could increase the frame rate. In this context, what are the meanings of ``frequently'' & ``large''? I don't know yet.
On the other hand, if, when the program is ready to process events from the event queue, the first event is frequently past due, being handled a little late, then maybe the program should reduce the frame rate. Maybe I should modify that to say that if the next frame event is past due, we should reduce the frame rate. It makes sense. There is a certain rate at which a given computer running the program can draw the graphics. It doesn't matter if the program schedules frames more frequently; the hardware can only go so fast. So when the program notices that frame events are handled late, it can reduce the frequency of the frame events, a little at a time, until the requested rate matches the rate that the hardware can deliver. The effective frame rate won't actually change. Instead, the number of frame events in the queue will be reduced until they match the rate that the hardware can handle. Fewer events in the queue means (slightly) better performance for other parts of the program.
I still don't know the precise meanings of ``frequently'' & ``large'', but I do know how to implement the frame-reduction algorithm, so I guess the program could start with a really high frame rate, like 60 frames per second. If a computer can keep up with that, then it's fast enough, & the program doesn't need to increase the frame rate. More likely than not, though, a computer these days cannot keep up with a frame rate of 60 per second, so the frame rate reduction algorithm I described would take effect, reducing the frame rate until it matched what the computer could deliver.
No, that won't work in all cases. It'd be fine for a while, but what would happen if the program entered a phase with so much action that the frame rate had to drop? The frame rate reduction algorithm would handle that fine. But when the program later moves to a phase where little is happening, the frame rate could increase, & it wouldn't. Without a feature to increase the frame rate when applicable, the program's frame rate can only decrease until you stop it & restart it. If you run into some part of the game where there is a ton of stuff happening & the frame rate has to drop drastically, then you'll be stuck with a horrible frame rate for the rest of the game. So I need an algorithm to increase the frame rate.
I notice that one situation which can arise is that the last event processed was a frame event, & there is time to spare before we process the next event (frame or otherwise). That situation does not necessarily indicate that the frame rate should increase. If the last event was a frame, then increasing the frame rate so that the next event is also a frame is not useful, because the world does not change between the frames. I guess a rule of thumb is that you want events to occur between frames; it's a waste to redraw two frames without events between them. So when your frame rate is high enough that you get about one frame event for each non-frame event, your frame rate does not need to increase at all.
So when should you increase the frame rate? I guess you could increase the frame rate when frames are handled on time & there are two or more non-frame events between frame events.
Ah, so maybe you put a rule in the event loop like this:
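Something like the following, reusing NOW from the event-loop sketch above. Every threshold in it (the once-per-second guard, the 1/100-second step, the 60-per-second ceiling) is a guess:

(defvar *frame-period* 1/30
  "Seconds between frame events.")
(defvar *last-adjustment* 0
  "When we last changed *FRAME-PERIOD*, in seconds.")

(defun maybe-adjust-frame-period (frame-late-p events-between-frames)
  ;; Adjust at most once per second so some statistics accumulate.
  (when (>= (- (now) *last-adjustment*) 1)
    (setf *last-adjustment* (now))
    (cond (frame-late-p
           ;; Frames arrive late: the hardware can't keep up, so ask
           ;; for frames less often.
           (incf *frame-period* 1/100))
          ((>= events-between-frames 2)
           ;; Frames are on time & plenty happens between them, so we
           ;; can afford more frames (but no more than 60 per second).
           (setf *frame-period* (max 1/60 (- *frame-period* 1/100)))))))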
You might want to put some other counter in there so we let some statistics accumulate before changing the frame rate. A full second between adjustments might be too long, but you don't want to waste time by oscillating the frame rate for each frame. Then again, maybe it won't hurt.
Removing & inserting events into the queue is costly, so I don't want to change the frame rate by canceling a frame-event task & creating a new frame-event task with a different period. Instead, each frame event could explicitly post the next frame event. There could be a global variable that tracked the time between frames. If we need to increase the frame rate, decrement the period variable. If we need to decrease the frame rate, increment the period variable. Then post the next frame event using the period in the variable.
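A sketch of that, reusing NOW & *FRAME-PERIOD* from above; DRAW-WORLD is a stub standing in for the single big C rendering call:

(defvar *event-queue* '()
  "Sorted list of (time . thunk) pairs.")

(defun post-event (time thunk)
  (setf *event-queue*
        (merge 'list *event-queue* (list (cons time thunk)) #'< :key #'car)))

(defun draw-world () t)   ; stub for the C rendering call

(defun frame-event ()
  (draw-world)
  ;; Re-post with the current period.  Nothing is ever canceled, so
  ;; changing the rate never requires searching the queue.
  (post-event (+ (now) *frame-period*) #'frame-event))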
Hmmm... Maybe this is all overkill. Since there is a maximum frame rate the computer can support, & the hardware will deliver that frame rate no matter what the program does, maybe the program should simply request a reasonable frame rate (like 30 fps). It could post the next frame event while handling the current one, & when it posts that event, it could use 1/30 of a second as the delay between frames. If the computer can deliver 30 frames per second, then a delay of 0.033 second between frames will give nearly 30 frames per second. If the computer cannot deliver 30 frames per second, then the frames will be late, & the 0.033 second between frames will allow other types of events to happen.
So if I set the duration between frames to something reasonable, then the actual capabilities of the hardware will provide a self-regulating frame rate mechanism. Simple. If the hardware is fast enough, great. If not, it'll deliver frames late but at a consistent rate. I didn't have to write any code to deal with it explicitly.
The only problem would be if the hardware was woefully inadequate. By using a delay between frames, there will be a certain number of non-frame events between frames, regardless of the rate. If frames are late, the other events will be late, too. But what if the hardware were almost, but not quite, capable of keeping up? In that case, if I could reduce the frame rate a little to remove frame-event clutter from the event queue, then maybe the frames & the workings of the simulated world would fare better. As usual, though, this is a borderline, special case. If a machine is fast enough, it's fast enough. If it's too slow, it's too slow. Adding extra code to deal with these situations just warps the dividing line to let some computers into the ``fast enough'' category while excluding others. Simpler is usually better.
Still, it'll be neat to see how well the delay-between-frames technique works. If it doesn't work well, I could use the explicit rate-adjustment algorithms for the frame rate.
Then again, it seems sloppy to schedule frames when they will be late, & like I said, the frames themselves will contribute to making the other events late.
I don't know. I don't know what to do.
Both the Lisp (logical) & C (graphical) portions of the program need to know about the state of the world. The world changes in the Lisp part of the program, & the C part of the program must draw it. How should they communicate?
I guess the simplest communication is for the two parts of the program to share a data structure that holds the state of the world. In theory, the C part of the program would need just one function, draw_world, which would deduce the graphical state of the world from the shared data structure & would draw it.
It could be expensive to scan the entire world data structure. Maybe the parts that have changed could be flagged or kept in a secondary data structure that was accessible quickly. Even so, when it's time for a frame, we have to call a single C function which does a lot of work. Maybe it would be better if the communication between Lisp & C could amortize the cost of drawing the world.
An interface that could amortize the cost of drawing might provide ways for Lisp to notify C of what has changed, as it changes. When the camera moves, Lisp calls a C function. When the avatar moves, call a C function. Here is a list of the C functions I can imagine at the moment (& without spending too much time on it):
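Written as CFFI bindings for concreteness; the library doesn't exist, & every name & signature here is invented, derived from the missiles, monsters, & cameras mentioned above:

;; (ql:quickload "cffi")
(cffi:defcfun "camera_moved"  :void (x :double) (y :double) (z :double))
(cffi:defcfun "avatar_moved"  :void (x :double) (y :double) (z :double))
(cffi:defcfun "monster_moved" :void (id :int) (x :double) (y :double) (z :double))
(cffi:defcfun "missile_fired" :void (from-id :int) (target-id :int) (kind :int))
(cffi:defcfun "missile_hit"   :void (missile-id :int) (explosion-kind :int))
(cffi:defcfun "read_input"    :int)    ; poll keyboard & mouse
(cffi:defcfun "draw_world"    :void)   ; render the current frame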
I'm sure there are others. Some types of communication aren't high-performance; they might be more akin to parameters of the drawing functions, like whether the outdoor lighting is noon, dusk, or night. They are low-bandwidth parameters that might belong in some shared data structure.
The idea with the ``this thing changed'' functions is that, when the Lisp/logical system determines that something has changed, it notifies the C/graphical system, which does part of the drawing. Maybe the graphical system can just move a few objects on the display list (or whatever the graphics library uses), or maybe it has to dump everything & redraw from scratch. The idea is that it amortizes the cost of drawing in the hopes of making it more efficient & also making it more deterministic, easier to deliver a frame on schedule.
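For example, the monster case might look like this on the Lisp side, assuming the hypothetical MONSTER-MOVED binding from the list above:

(defstruct monster id x y z)

(defun move-monster (m dx dy dz)
  (incf (monster-x m) dx)
  (incf (monster-y m) dy)
  (incf (monster-z m) dz)
  ;; Notify the graphics side immediately, while the change is cheap
  ;; to describe.  Each small call does a little of the drawing work,
  ;; so the frame event itself has less to do.
  (monster-moved (monster-id m) (monster-x m) (monster-y m) (monster-z m)))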
Here are some links to online articles about game development. I need to make some comments while I still remember my reactions, & maybe subdivide them by topic, too.
It's okay, a worthy attempt, but it doesn't really tell you much. The first part is sort of about DLLs. It claims to be independent of operating system, but then why refer to dynamic/shared libraries as ``DLLs''? It kind of wanders, too, because the first part is about shared libraries, but the second part has little to do with shared libraries. In the second part, it implements (in bad C++ style, I think) a thin interface on top of both DirectX & OpenGL. It seems that a better way to make a library-independent graphics library is to implement one in terms of the other. Since I prefer OpenGL, I would rather have seen an article about implementing OpenGL in terms of DirectX. Aren't there other 3D graphics libraries? Maybe one of them is nicely generic & suitable for implementation in terms of nearly any other 3D graphics library. Hell, even a discussion of why implementing an existing API in terms of another is both a good idea & a difficult prospect would have been better.
A refreshing, optimistic, thought-provoking article about how the current mass acceptance of online shopping, coupled with the high cost of using traditional software distribution channels, might be the environment in which an old software distribution method revives, which might in turn revitalize software development in the garage.
This is a really elegant article about game design. It's an overview that apparently accompanied an online class. It doesn't tell everything, doesn't go into detail, but - like I said - it's really well-done. I'd call it a ``charming read''. Recommended.
I don't actually want to write a GBA game, but I want to see what the API is like & what it's like to write such games.
Appears to be an excellent introduction to MOO programming. ``Introduction'' not because the information is superficial; it's not. It's a good introduction because it covers lots of things. You can get a view of the whole topic from this article & then go learn the other things you need to know.
The definitive work, I would think. Haven't read it (as of 27 July 2003) so I don't know for sure.
This document is available in multi-file HTML format at http://lisp-p.org/taga/.
This document is available in DVI format at http://lisp-p.org/taga/taga.dvi.
This document is available in PostScript format at http://lisp-p.org/taga/taga.ps.
Gene Michael Stover 2008-04-20