Reducing size of Rotorz data before building scenes (version control)

Ash Blue
New Member
Registered: 2017-04-28
Posts: 8

Topic

For our project (http://playadnc.com) we have some very large 2D scenes. The source files (before building) are getting pretty large due to massive grids (500x100 resulting in around 20mb added for Rotorz scene data). It's starting to become a concern for version control. Is there a set of options I can tick to reduce the impact of a pre-built Rotorz grid in a scene?

Lea Hayes
Rotorz Limited
From: United Kingdom
Registered: 2014-03-04
Posts: 638

Response 1

Massive grids are naturally going to use a lot more memory. If you have large empty areas then you might find using a smaller chunk size helps to reduce data storage requirements since entirely empty chunks do not need to be stored.

When working with large data files it's a good idea to consider using something like LFS (in the case of git) since many source control systems are not really intended to be used with large data files.
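For git specifically, that typically means a `.gitattributes` entry along these lines (assuming Unity's default `.unity` scene extension; adjust the pattern to suit how your project stores scene data):

```
# .gitattributes — route large Unity scene files through Git LFS
*.unity filter=lfs diff=lfs merge=lfs -text
```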

I hope that this helps!

Ash Blue
New Member
Registered: 2017-04-28
Posts: 8

Response 2

We're actually already using LFS, so I'll have to selectively check in the larger files then. I'll also try using smaller chunks. Do you have a recommended chunk size? Based on our usage I'm thinking something between 20 and 50. Also, what are the cons of smaller chunk sizes? I couldn't find any specific details in the docs (though I might have missed them).

Lea Hayes
Rotorz Limited
From: United Kingdom
Registered: 2014-03-04
Posts: 638

Response 3

If you are using procedural 2D tilesets then the optimal chunk size for render performance tends to be 100x100, since larger chunks generally result in fewer draw calls. Reducing the chunk size increases the chances of having entirely empty chunks. When a chunk is entirely empty it ceases to exist in the higher level tile system data structure, so the internal data structure can have gaps.

Tile System
 |-- Chunk
 |     |-- TileData[100 * 100]
 |-- Chunk
 |     |-- TileData[100 * 100]
 |-- Chunk
 |     |-- TileData[100 * 100]
 |-- null

If the tile system were 400x400 and there were no tiles in the last quadrant, you would have a saving of sizeof(TileData) * 100 * 100 for each of those four empty chunks.
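To make the trade-off concrete, here is a rough back-of-envelope sketch (plain Python, not Rotorz code) of how many TileData entries end up serialized when the painted area covers only part of the system; multiply by sizeof(TileData) for a byte estimate. The function name and the rectangular-occupancy assumption are mine, purely for illustration:

```python
# Back-of-envelope estimate of serialized tile data, assuming (as above)
# that every chunk touching an occupied region stores its full
# chunk_size x chunk_size TileData array, while fully empty chunks
# store nothing at all.
import math

def stored_tiles(occupied_w, occupied_h, chunk_size):
    """Tiles serialized when painted tiles fill only an
    occupied_w x occupied_h corner of the tile system."""
    cols = math.ceil(occupied_w / chunk_size)
    rows = math.ceil(occupied_h / chunk_size)
    # Each non-empty chunk serializes its entire TileData array,
    # even if only partially filled.
    return cols * rows * chunk_size * chunk_size

# Tiles occupy a 250x250 corner of a 400x400 system:
print(stored_tiles(250, 250, 100))  # 9 chunks  -> 90000 tile slots
print(stored_tiles(250, 250, 50))   # 25 chunks -> 62500 tile slots
```

The smaller chunk size stores fewer tiles here because partially filled 100x100 chunks are replaced by a mix of full and entirely empty 50x50 chunks.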

It's about finding a balance between draw calls and reduced data structure size when there are large areas of emptiness.

I would suggest using the profiler to test the performance of different chunk sizes in your specific project, since performance is very use case specific. Tile system size, chunk size, art complexity, shader complexity, and so on are all factors in your overall performance.

Ash Blue
New Member
Registered: 2017-04-28
Posts: 8

Response 4

That's really good to know. Is there perhaps a build option that would allow chunks to be optimized on build (perhaps there is one and I'm just missing it)? Lowering the chunk size significantly reduces my file sizes (even at 50x50 chunks), but I don't want to sacrifice performance.