Memory Usage Optimization

blueshackstudios
Chaos Rift Newbie
Posts: 5
Joined: Mon May 12, 2014 1:15 pm

Memory Usage Optimization

Post by blueshackstudios »

Hey guys, I'm a little confused and I know this is a frequent topic, but hear me out.

Our game engine has been doing very well, and memory usage is no problem on a PC. It's a 2D game built on OpenGL and SDL.

The problem I'm having is that I've started getting into Dreamcast development, and my primary concern is getting under its 16 MB main RAM limit. My game currently hits around 28 MB for a medium-sized level, which makes me wonder how that's even possible. I know I can look into texture compression (right now I'm using uncompressed textures loaded from PNGs), but I'd really just like to hear any tips, advice, or good practices that will help me stay consistently under the limit.

Thanks guys.
ac3t1ne
Chaos Rift Newbie
Posts: 9
Joined: Sun Aug 17, 2014 1:27 am

Re: Memory Usage Optimization

Post by ac3t1ne »

As far as I know, textures are stored in a separate 8 MB of VRAM, so that should help you out some. Beyond that, reducing the colour depth of all your PNGs can make them tiny. You could also ask yourself whether you really need doubles where a float might be fine (say you're only accounting for a couple of decimal places anyway), or whether a short would do for a variable that only ever holds an enum value. And things like a title menu class should be dynamically allocated so they aren't sitting in memory while the game itself is running, etc.
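Something like this, for example (just a sketch, the struct and its fields are made up):

Code: Select all

#include <stdint.h>

struct EntityFat {        // the careless version
    double x, y;          // 8 bytes each; far more precision than a 2D game needs
    int    health;        // 0-100 would fit in a single byte
    int    tileId;        // only a few hundred tile types exist
};                        // typically 24 bytes per entity

struct EntitySlim {
    float    x, y;        // 4 bytes each; plenty for 2D world coordinates
    uint8_t  health;      // 0-255
    uint16_t tileId;      // up to 65535 tile types
};                        // typically 12 bytes per entity (after padding): half the size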

Sorry if I am telling you how to suck eggs!
X Abstract X
Chaos Rift Regular
Posts: 173
Joined: Thu Feb 11, 2010 9:46 pm

Re: Memory Usage Optimization

Post by X Abstract X »

It's hard to suggest much without knowing the specifics of your game. I don't have experience with the Dreamcast, but I'm assuming you're currently using 32-bit textures? Try reducing to 16-bit or 8-bit paletted textures if possible.

Also make sure you're freeing your local copy of image/pixel data after you create your textures with OpenGL.
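For example, something along these lines (just a sketch; assumes you're loading with SDL_image, that the surface is 32-bit RGBA, and it omits error checking):

Code: Select all

#include <SDL/SDL_image.h>
#include <GL/gl.h>

GLuint loadTexture(const char* path) {
    SDL_Surface* img = IMG_Load(path);   // decoded pixels now sit in main RAM
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, img->w, img->h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, img->pixels); // copied to the GPU
    SDL_FreeSurface(img);                // free the CPU-side copy immediately
    return tex;
}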

If possible you should partition your levels and only load up the assets required for the current section and possibly adjacent sections.

Edit: I took a look at your blog. There's no reason you should be using so much memory in a tile-based game. Any chance you're loading multiple copies of the same tile type?
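If you are, a simple cache fixes it (sketch only; assumes textures are looked up by file path and reuses the loadTexture above):

Code: Select all

#include <map>
#include <string>

static std::map<std::string, GLuint> textureCache;

GLuint getTexture(const std::string& path) {
    std::map<std::string, GLuint>::const_iterator it = textureCache.find(path);
    if (it != textureCache.end())
        return it->second;                  // already loaded once; reuse it
    GLuint tex = loadTexture(path.c_str());
    textureCache[path] = tex;
    return tex;
}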
dandymcgee
ES Beta Backer
Posts: 4709
Joined: Tue Apr 29, 2008 3:24 pm
Current Project: https://github.com/dbechrd/RicoTech
Favorite Gaming Platforms: NES, Sega Genesis, PS2, PC
Programming Language of Choice: C
Location: San Francisco

Re: Memory Usage Optimization

Post by dandymcgee »

You may also have to limit the size of your tile maps to help address this issue. I know Elysian Shadows separates "Levels" into chunks called "Areas". If load times become a problem, you can pre-fetch adjacent chunks to help levels appear more seamless.
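Roughly like this (purely illustrative; Area, loadArea, and unloadArea are made-up names):

Code: Select all

struct Area { bool loaded; /* tiles, entities, textures ... */ };

extern Area areas[];
extern int  areaCount;
void loadArea(int index);    // stream this chunk's assets in
void unloadArea(int index);  // free this chunk's assets

// Keep only the player's current area and its immediate neighbors resident.
void updateLoadedAreas(int current) {
    for (int i = 0; i < areaCount; ++i) {
        bool wanted = (i >= current - 1 && i <= current + 1);
        if (wanted && !areas[i].loaded)       loadArea(i);    // prefetch
        else if (!wanted && areas[i].loaded)  unloadArea(i);  // reclaim
    }
}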

As others have mentioned, paletted textures and texture compression are going to be a big deal on a console with limited VRAM like the Dreamcast.

Aside from that, since you're using C++, you may want to supplement excessive use of STL data structures with lighter C alternatives. For instance, if you don't need the extra functionality / overhead of std::vector, use a simple array or custom linked list in its place.

If you're really pressed for memory, you can use more advanced data packing and bit manipulation techniques to make every structure take up as few bits as possible. Don't start here though, there are likely more efficient ways to reclaim memory. As always, the best way to find opportunities for optimization is to profile your code.
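For instance (a sketch; the field widths depend entirely on your game's actual limits):

Code: Select all

#include <stdint.h>

// One map tile packed into 16 bits instead of a handful of full ints.
struct PackedTile {
    uint16_t tileId    : 10;  // up to 1024 distinct tile types
    uint16_t flags     : 4;   // solid, animated, flipped, trigger
    uint16_t elevation : 2;   // 4 height layers
};
// sizeof(PackedTile) == 2, so a 256x256 layer costs 128 KB
// instead of the ~768 KB that three plain ints per tile would.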
Falco Girgis wrote:It is imperative that I can broadcast my narcissistic commit strings to the Twitter! Tweet Tweet, bitches! :twisted:
blueshackstudios
Chaos Rift Newbie
Chaos Rift Newbie
Posts: 5
Joined: Mon May 12, 2014 1:15 pm

Re: Memory Usage Optimization

Post by blueshackstudios »

Well, I have made sure that all of my variables use appropriately sized types (short, unsigned, etc.).

I have not compressed my textures yet but I made sure that I am clearing the CPU copy after uploading them to the GPU.

My main concern is RAM rather than VRAM at the moment. I'll try some of your suggestions and see how much it helps, thank you.
Fillius
ES Beta Backer
Posts: 11
Joined: Fri Feb 01, 2013 7:53 am

Re: Memory Usage Optimization

Post by Fillius »

dandymcgee wrote:Aside from that, since you're using C++, you may want to supplement excessive use of STL data structures with lighter C alternatives. For instance, if you don't need the extra functionality / overhead of std::vector, use a simple array or custom linked list in its place.
I am very sorry, but more or less unfounded claims about the (memory) inefficiency of C++ data structures compared to "lighter C alternatives" are a pet peeve of mine, and generalizing statements like this, without explaining what exactly may be less efficient, lead too many new programmers to believe in something akin to Linus Torvalds' misguided opinion of C++ (imho). Therefore I will elaborate on that a little ;-):

The "housekeeping" overhead of std::vector compared to a (dynamically allocated) C-style array is negligible. Whilst the exact size of a std::vector(minus the actually stored data) is implementation defined, it is usually the size of 3 pointers(one to the beginning of the allocated memory, the size of it, as well as the size of the really used parts).

The code

Code: Select all

#include <iostream>
#include <vector>

int main() { std::cout << sizeof(std::vector<int>); }
outputs "12" when compiled for my Dreamcast (and 24 on my laptop, which is consistent with the 3 pointers mentioned).

Considering that one would at least need some variable to hold the size of a simple dynamically allocated array (assuming the size is not known in advance and unchanging, in which case a statically sized array, or maybe, with C++11, a std::array, would probably be the right choice), the required management data already comes to sizeof(int*) + sizeof(size_t), which is 8 on my DC. This reduces the actual "overhead" of std::vector to 4 bytes. And the very moment you intend to preallocate more memory than is currently used, in order to reduce the number of allocations necessary, you will need another size_t and use exactly the same amount of memory as a std::vector would, while having to duplicate most of its functionality. That introduces countless opportunities for bugs and further performance problems, all of which could have been avoided by simply using the usually perfectly acceptable STL implementation (I, personally, would have a really hard time trying to match its efficiency without expending considerable time and effort).
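To make that concrete (the struct name is of course made up):

Code: Select all

#include <stddef.h>

// The hand-rolled "lightweight" growable array needs the same three
// words of bookkeeping as a typical std::vector implementation.
struct DynArray {
    int*   data;      // start of the allocation
    size_t size;      // elements currently in use
    size_t capacity;  // elements allocated in total
};
// On the Dreamcast: 4 + 4 + 4 = 12 bytes, exactly sizeof(std::vector<int>).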

There are, however, two pitfalls when working with std::vector that might really waste quite a lot of memory:
  1. In order to provide the promised amortized O(1) complexity of some members (push_back, emplace_back), std::vector must allocate more memory than is immediately needed whenever an append exceeds its current capacity.
  2. Even when removing all elements from it, there is no guarantee that the now unused memory is freed.
The first one can be addressed by using the reserve or resize methods to tell the vector exactly how much capacity is needed (admittedly, the Standard does not guarantee that only that much memory is allocated, but most of the time it is, and in the instances where it isn't there is usually a good reason).

The "traditional" way of solving the second problem was swapping the vector with a temporary one, as outlined here. When using C++11, however, you could also use the new shrink_to_fit method.

(In case you are concerned about efficiency aspects other than memory, take a look at this excellent stackoverflow answer.)

As for using a (custom) linked list in place of the vector, I would advise against it in most cases. Even the most trivial, lightweight implementation of a singly linked list requires at least one extra pointer per element, making the "housekeeping" overhead O(n) in terms of memory. Furthermore, in terms of speed, lists can be rather problematic for various reasons; I would recommend watching this excerpt of Bjarne Stroustrup's keynote at GoingNative 2012, as well as reading this.
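To illustrate the per-element cost (sketch):

Code: Select all

// Storing plain ints in a singly linked list on a 32-bit machine:
struct Node {
    int   value;  // 4 bytes of payload
    Node* next;   // 4 bytes of bookkeeping, for every single element
};
// 100% memory overhead before even counting per-allocation heap headers,
// and traversal chases pointers all over memory, which is murder on the cache.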

In conclusion, I would like to emphasize once more that STL containers are not necessarily more heavyweight, memory-wasting, or slow than their C alternatives, although they of course can be if used carelessly. As can their C alternatives.

As always, I hope I did not offend anyone (and apologize if I did), and I beg your forgiveness for the long text.
dandymcgee
ES Beta Backer
Posts: 4709
Joined: Tue Apr 29, 2008 3:24 pm
Current Project: https://github.com/dbechrd/RicoTech
Favorite Gaming Platforms: NES, Sega Genesis, PS2, PC
Programming Language of Choice: C
Location: San Francisco

Re: Memory Usage Optimization

Post by dandymcgee »

@Fillius: Your detailed expansion on my (admittedly vague) comment is most appreciated.

In any case, the minor optimization details are all irrelevant if the developer doesn't bother to profile their code.
That, in my opinion, is the single most important piece of advice to take from this (or any) discussion on memory *or* performance optimization.

Find a profiler you like, and use it.
Falco Girgis wrote:It is imperative that I can broadcast my narcissistic commit strings to the Twitter! Tweet Tweet, bitches! :twisted: