Well, writing a custom memory allocator that is more efficient than the default one is very difficult. Object data is not frequently created or destroyed in the engine itself, and Lua would not use the pool anyway, rendering the new allocator nearly useless. In addition, such an allocator would not be cross-platform, meaning there would have to be one for Mac/Linux, Windows, Android, Emscripten etc., which is unmaintainable for me.
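To illustrate what such a pool would even look like, here is a minimal fixed-size object pool in C++. This is only a sketch of the general technique, not engine code; the `FixedPool` name and its interface are hypothetical.

```cpp
#include <cstddef>
#include <vector>

// Minimal fixed-size object pool: hands out slots from one preallocated
// block and keeps an intrusive free list of returned slots.
// Illustrative only; not part of the Neo engine.
class FixedPool {
public:
    FixedPool(std::size_t slotSize, std::size_t slotCount)
        : m_slotSize(slotSize < sizeof(void*) ? sizeof(void*) : slotSize),
          m_memory(m_slotSize * slotCount)
    {
        // Thread every slot onto the free list.
        for (std::size_t i = 0; i < slotCount; ++i)
            release(m_memory.data() + i * m_slotSize);
    }

    // Returns a free slot, or nullptr when the pool is exhausted.
    void* acquire()
    {
        if (!m_freeList) return nullptr;
        void* slot = m_freeList;
        m_freeList = *static_cast<void**>(slot); // pop head of free list
        return slot;
    }

    // Returns a slot to the pool.
    void release(void* slot)
    {
        *static_cast<void**>(slot) = m_freeList; // push onto free list
        m_freeList = slot;
    }

private:
    std::size_t m_slotSize;
    std::vector<char> m_memory;
    void* m_freeList = nullptr;
};
```

Even this toy version shows the maintenance problem: a production variant would additionally need alignment guarantees, thread safety, and per-platform tuning to actually beat the system allocator.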
Is memory allocation really where your bottleneck is?
The biggest performance improvement is probably to be found in the multithreading mechanism. Much more has to happen in parallel, which is currently blocked by the absence of a working IPC mechanism like messaging. Another option is moving all visibility testing to the GPU using OpenCL on platforms that support it. Here again: how much is there to gain? Is it worth the time to develop and test it?
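The messaging mechanism mentioned above could, in its simplest inter-thread form, look like the sketch below. This assumes C++ and standard-library threading primitives; `Message` and `MessageQueue` are hypothetical names, not an existing Neo API.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>

// A message passed between engine threads; fields are placeholders.
struct Message {
    int type;
    std::string payload;
};

// Minimal thread-safe message queue: producers post, a consumer
// blocks until a message arrives. Illustrative only.
class MessageQueue {
public:
    void post(Message msg)
    {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_queue.push(std::move(msg));
        }
        m_signal.notify_one(); // wake one waiting receiver
    }

    // Blocks until a message is available, then removes and returns it.
    Message wait()
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_signal.wait(lock, [this] { return !m_queue.empty(); });
        Message msg = std::move(m_queue.front());
        m_queue.pop();
        return msg;
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_signal;
    std::queue<Message> m_queue;
};
```

With something like this in place, subsystems could run on their own threads and exchange work items instead of blocking each other on shared state.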
Before making any assumptions about optimizations we need real data that shows the bottlenecks. Doing something simply because it sounds cool, or because someone else did it, does not mean it is of any use in our case.
Because of that we need more profiling options. I already have some thoughts on how to integrate them into the workflow from a UI standpoint and how to integrate them into the engine so they show accurate data.
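One possible building block for such in-engine profiling is a scoped timer that measures a block of code via RAII. This is only a sketch of the idea in C++; `ProfileScope` is a hypothetical name, and a real integration would record into the engine's own reporting UI rather than print.

```cpp
#include <chrono>
#include <cstdio>
#include <string>
#include <utility>

// Measures the lifetime of a scope and reports it on destruction.
// Illustrative only; not an existing Neo API.
class ProfileScope {
public:
    explicit ProfileScope(std::string name)
        : m_name(std::move(name)),
          m_start(std::chrono::steady_clock::now()) {}

    // Microseconds elapsed since the scope was entered.
    long long elapsedMicroseconds() const
    {
        using namespace std::chrono;
        return duration_cast<microseconds>(
            steady_clock::now() - m_start).count();
    }

    ~ProfileScope()
    {
        // In a real engine this would feed a profiler view, not stdout.
        std::printf("%s: %lld us\n", m_name.c_str(), elapsedMicroseconds());
    }

private:
    std::string m_name;
    std::chrono::steady_clock::time_point m_start;
};
```

Usage would be as simple as `ProfileScope scope("updateScene");` at the top of a function, which is cheap enough to leave in place while gathering the real data the previous paragraph calls for.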
I think the biggest challenge (at least for 1.0) will be bringing the Android (and future Emscripten) port on par with the PC version.
I want to completely remove the old Maratis editor, along with all dependencies we no longer need, from the repository. That would include the MGui library, which Neo no longer uses. What do you think about that?