As the end of my studies is nigh, I’m looking into possible topics for my master thesis. Alas, I’ve grown a bit tired of writing game engines and renderers lately — who’d have thought.
Luckily, this quest for new shores was resolved quickly. I’ve also developed quite an interest in procedural generation and have always had a bit of a fascination with fantasy worlds and maps. So, I’ve decided to work on procedural world generation.
This is obviously going to be a fun project with much potential for interesting failures.[1] Therefore, I have the best intentions of documenting my plans, progress and interesting problems here. By best intentions I do of course mean “force myself”, and by documenting I mean “dump my thoughts here and try to make sense of it all”.
Without further ado: Welcome to this unwise cocktail of a forever project and a master thesis.
Before we can get started with the actual generation/simulation algorithms, we first need to decide how to store our data, i.e. how the world is represented in the data model. Given that it’s not yet clear which algorithms we will use later and what their requirements will be, we’ll want to choose an extensible model that won’t restrict our options later.
In the last post we looked at how we can store our geometry, how we can traverse the resulting mesh and how we’ll later store our actual data. But one aspect we haven’t covered yet is how such a mesh data structure can be constructed in the first place, i.e. how we get from a soup of triangle faces to a quad-edge mesh.
There are two basic operations defined in the original paper to construct and modify meshes:
MakeEdge: Creates a new edge e and its corresponding quad-edge
Splice: Takes two edges a and b and either connects them or, if they are already connected, splits them apart.
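As a rough C++ sketch, these two primitives might look something like this (the signatures are my reading of the paper, not necessarily how they would appear in our API):

// Sketch of the two primitives from the original quad-edge paper;
// the exact types and signatures here are assumptions.
Edge make_edge();             // create an isolated edge together with its
                              //   quad-edge (the four directed edge views)
void splice(Edge a, Edge b);  // exchange the edge rings of a and b: joins
                              //   them if separate, splits them if connected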
While these two operations are powerful enough to construct any quad-edge mesh and apply any modification we might want, they are not ideal for our use case. First, they are more complex to use than I would like for everyday work, because they require quite a bit of knowledge about topology and use concepts that are not as easy to visualize as faces in a triangle mesh. And second, they allow for structures that we don’t actually want to support, like non-triangular faces.
So, what we will do instead is define our own operations to make our use case as easy as possible. In this post, we will first look at just the construction of a new mesh, for which we’ll define two functions:
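Roughly along these lines (the names and signatures below are placeholders for illustration, not the final API):

// Placeholder sketch of two construction functions; vec3 and the
// function names are assumptions for illustration.
Vertex add_vertex(vec3 position);               // insert a new, initially
                                                //   unconnected vertex
Face   add_face(Vertex a, Vertex b, Vertex c);  // connect three vertices into
                                                //   a triangle, creating the
                                                //   required (quad-)edges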
This will (hopefully) be the last post about geometry and the Mesh-API before we can start with the actual generation algorithms.
We’ve already defined a basic data structure for our geometry, implemented algorithms to traverse it and, in the last post, constructed a spherical Delaunay triangulation of random points, distributed evenly on a sphere.
So, the only part of the API still missing is a set of functions to modify existing meshes. The three fundamental operations we’ll define for this are:
class Mesh {
  public:
    // ...

    // Flips the edge shared by two adjacent triangles, reconnecting it
    // between their two opposite vertices.
    void flip(Edge e);

    // Splits the edge at split_point (interpolation factor along the edge)
    // and returns the newly created edge.
    Edge split(Edge e, float split_point);

    // Collapses the edge into a single vertex at join_point and returns
    // that vertex.
    Vertex collapse(Edge e, float join_point);
};
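To give a feeling for how these operations fit together, here is a hypothetical remeshing pass that refines long edges and removes short ones. mesh.edges(), length() and the thresholds are assumed helpers, not part of the actual API:

// Made-up thresholds, purely for illustration.
const float max_length = 2.f;
const float min_length = 0.5f;

for(Edge e : mesh.edges()) {
    if(length(e) > max_length)
        mesh.split(e, 0.5f);     // insert a new vertex halfway along the edge
    else if(length(e) < min_length)
        mesh.collapse(e, 0.5f);  // merge the endpoints at the midpoint
}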
Well, that was a bit of a longer pause than I had hoped. So much for my “best intentions of documenting my progress”…
But I’ve finally managed to implement a working proof-of-concept for the plate simulation to generate terrain, although it’s still a bit rough around the edges.
While there are still many problems that need more work[1], the general approach appears to work, at least.
But one thing I’ve noticed now is that the current tooling for visualizing the results is not sufficient to test and debug the algorithm. So, before I continue working on the algorithm itself, I’ve decided to refactor and improve the editor to better visualize the generated data and allow users to modify the intermediate results. But first, it’s probably a good idea to document how the current implementation of the plate simulation works[2].
The first aspect we will have to take a closer look at is how we can generate our starting point, i.e. the initial state before the simulation is run.
We’ve already discussed the first half of that in a previous post, where we generated a simple triangle mesh of a sphere. So, I’ll first recap the most important parts of that and then dive into how we can populate that mesh with the information we need for the actual simulation, like plate IDs, plate velocities and elevations.
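To make that concrete, the per-vertex state could look roughly like this (the names and types are assumptions for illustration, not the actual data model):

// Sketch of the per-vertex simulation state mentioned above;
// all names and types are placeholders.
struct Plate_vertex_data {
    int   plate_id;     // which tectonic plate this vertex belongs to
    float velocity[3];  // movement of the plate at this point,
                        //   tangential to the sphere’s surface
    float elevation;    // terrain height relative to sea level
};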
Now that we have a way to generate some (albeit a bit dull) tectonic plates, we can use an iterative algorithm that simulates plate movements and interactions to refine them into (hopefully) realistic-looking landforms.
Because the simulation itself is relatively complex[1], I’ve split it into three separate posts:
Moving the plates and determining how they should interact with each other (this post)
Updating the mesh after vertices have been moved, which includes the creation and destruction of crust material on plate boundaries, as well as detecting and handling collisions between plates
Simulating large-scale plate interactions like rifting and suturing
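Put together, a single step of the simulation could be structured roughly like this; every function here is a placeholder for the logic described in these posts, not an actual function from the project:

// Rough outline of one simulation step, mirroring the three parts above.
void simulation_step(Mesh& mesh) {
    move_plates(mesh);          // move vertices along their plate’s velocity
    update_boundaries(mesh);    // create/destroy crust at plate boundaries,
                                //   detect and handle collisions
    update_plate_events(mesh);  // large-scale interactions: rifting, suturing
}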