OpenGL Mesh Class

D.V.D

Make a wish
I'm trying to write my own mesh class without copying the code from a tutorial, and I ran into some trouble. As of now I'm not actually importing any model data; I have all the vertices stored in a C++ std::vector and a Render function to draw the model. The problem is that once I call the render function, my GLuint VBO has a value of 0, and the size of my vertices vector has apparently dropped from 5 (or whatever value it was before) to 0. I'm not sure why. I figured it might be something to do with how OpenGL handles buffers, since I can't modify these values outside of my class (they're private). Does anyone have an idea as to why these values get modified and become 0? I call CreateMesh just before glutMainLoop is called, which is where my tutorial creates its model. The // comments after the cout calls state the values those calls display.

Mesh.h

Code:
#include <vector>
#include <assert.h>
#include "Vector.h"
#include "glew.h"
#include <iostream>
 
using namespace std;
 
struct Vertex {
    Vector3f pos;
 
    Vertex () {}
 
    Vertex (const Vector3f &_pos) {
        pos = _pos;
    }
};
 
struct Mesh {
public:
 
    Mesh ();
    Mesh (vector<Vertex>);
 
    void Render ();
 
private:
 
    GLuint VBO; // Vertex Buffer Object
    vector<Vertex> vertices;
 
};

Mesh.cpp

Code:
#include "Mesh.h"
 
Mesh::Mesh () {}
 
Mesh::Mesh (vector<Vertex> _vertices) {
    vertices.resize(_vertices.size());
    vertices = _vertices;
 
    //cout << vertices.size() << endl; // 5
    //cout << VBO << endl; // 3435973836
 
    glGenBuffers(1, &VBO);
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex) * vertices.size(), &vertices[0], GL_STATIC_DRAW);
 
    //cout << vertices.size() << endl; // 5
    //cout << VBO << endl; // 1
}
 
void Mesh::Render () {
    //cout << vertices.size() << endl; // 0
    //cout << VBO << endl; // 0
    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
    glDrawArrays(GL_POINTS, 0, vertices.size());
    glDisableVertexAttribArray(0);
}

Render Function:

Code:
Mesh CreateMesh () {
    vector<Vertex> Vertices (5, Vector3f());
    Vertices[0].pos = Vector3f(0.0f, 0.0f, 0.0f);
    Vertices[1].pos = Vector3f(0.0f, 0.0f, 0.0f);
 
    return Mesh(Vertices);
}
 
static void RenderScene () {
    glClear(GL_COLOR_BUFFER_BIT);
    Model.Render();
    glutSwapBuffers();
}
 

s3rius

Linux is only free if your time is worthless.
Do you have a ~Mesh() function or any other means of automatically calling glDeleteBuffers()?

And have you made double sure that the object you're calling Render() on is the same object you get from CreateMesh()?

I don't see anything wrong with this code, so my guess is that you prematurely delete your VBO somewhere else or you "lost" your Mesh object somewhere along the way.

It can't be caused by OpenGL, because OGL has no way of modifying your objects. The fact that VBO and vertices change their contents means your own code must have done it.
 

D.V.D

Make a wish
Yes I have; the variables are exactly the same. A user over at gamedev told me it has something to do with my use of pointers, and I think it might be a memory leak, though I'm not exactly sure. I'm reading over the pointers and references chapters in my C++ book (it's an old book, but I think the syntax for pointers and references has stayed the same). As for deleting my VBO, I didn't do that before; I've now added a destructor in which I call glDeleteBuffers, but I'm not sure if that's the only function I need to call. Do I have to set the VBO to 0 or something, or will glDeleteBuffers delete my VBO for sure?

EDIT: In case it helps, here's the post on gamedev: http://www.gamedev.net/topic/640151-mesh-class/
 

s3rius

Linux is only free if your time is worthless.
Since I don't have a gamedev account I'll reference a few things here.

RobTheBloke has addressed a few important points, somewhat specific to how OGL works.

You asked
Why exactly is it wrong to create a constructor for Mesh the way I did?

Usually it's not a bad thing. It's a bit strange maybe (usually you'd put that function as a constructor, or a static function), but just fine.

But now you're using OpenGL data. Now it gets more tricky.

Code:
glGenBuffers(1, &VBO);
glBindBuffer(...);
glBufferData(...);

With the first line you tell OGL to create a buffer for you, and write this buffer's id into VBO.
Basically you can think of OGL as having an array of buffers. If you call glGenBuffers it'll make one of these buffers available to you and give you its array index.

Consider the following Mesh class:

Code:
class Mesh{
public:
    Mesh(std::vector<Vertex> vertices){
        glGenBuffers(1, &VBO);
        //Add stuff to the buffer
    }
 
    ~Mesh(){
        glDeleteBuffers(1, &VBO);
    }
};
(basically what yours looks like, with the added destructor)

It looks fine at first. We create a new buffer every time we create a new Mesh, and we destroy that buffer when we get rid of our Mesh object.

But what we don't take into consideration are object copies:

Code:
void RenderMesh(Mesh m){ //m is copy of mesh, so m.VBO == 1
    m.Render();
  //<- m will be destroyed, glDeleteBuffers(m.VBO) will be called
}
 
void someFunction(){
    std::vector<Vertex> someData ....
 
    Mesh mesh ( someData ); //Let's say mesh.VBO == 1
    RenderMesh(mesh);
    //<- after m was destroyed, mesh.VBO is invalid as well.
}

Now we have a problem. When we pass 'mesh' to RenderMesh(), C++ will actually create a copy of 'mesh' (which is 'm') to use inside RenderMesh().
'm' contains the same information as 'mesh'.
But when RenderMesh() is done it'll destroy 'm', because this copy isn't needed anymore.
This will cause 'm's destructor to be called, which - in turn - will call glDeleteBuffers().

But since 'm' and 'mesh' share the same data, they also share the same VBO value (since it's an exact copy). That means that 'mesh's VBO is now destroyed as well.


Long story short:

If you put glDeleteXXX() into your destructors you have to be extra careful not to create copies of your objects, or else you'll lose all your OGL data.
Use of CreateMesh() and similar things may cause copies to be made.
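
One way to let the compiler catch accidental copies for you is to forbid copying outright. A minimal sketch of the C++03 idiom (declare the copy operations private and never define them; in C++11 you'd write Mesh(const Mesh&) = delete; instead):

Code:
class Mesh{
public:
    Mesh(std::vector<Vertex> vertices){
        glGenBuffers(1, &VBO);
        //Add stuff to the buffer
    }

    ~Mesh(){
        glDeleteBuffers(1, &VBO);
    }

private:
    //Declared but never defined: any accidental copy now fails to
    //compile (or link) instead of silently deleting the buffer twice.
    Mesh(const Mesh&);
    Mesh& operator=(const Mesh&);

    GLuint VBO;
};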

That being said, I very much disagree with

Take yourself outside, slap yourself around a bit, and go and read the chapter on C++ pointers, followed by the chapter on C++ references. Learn how to use them, and your problems will be greatly reduced.

As a general solution you should try to avoid pointers and references whenever possible. Only use them if you have an actual need to.
Passing and returning non-reference and non-pointer objects is absolutely fine, especially if you're using C++11.

This instance here is specific. The use of OGL handles forces you to be more cautious.

Code:
Mesh* CreateMesh(){
    return new Mesh( vertices );
}
 
// elsewhere....
Mesh* g_mesh = CreateMesh();
 
// and when you are absolutely finished with it....
delete g_mesh;

This is the "old" way of solving these problems. Using pointers you make sure to not create any copies of your meshes.
But it's also tedious and error-prone.

Modern C++ makes old pointers somewhat obsolete with the arrival of things like std::shared_ptr.

Code:
typedef std::shared_ptr<Mesh> SharedMesh;
 
SharedMesh CreateMesh(){
    std::vector<Vertex> someData ...
    return std::make_shared<Mesh>( someData );
}
 
SharedMesh yourMesh;
 
void RenderFunction(){
    yourMesh = CreateMesh();
 
    yourMesh->Render();
}

shared_ptr is a useful (albeit confusing at first) tool for these kinds of situations. Once you've learned it, it makes plain old pointers a thing of the past.

Regarding:
* facepalm *
Probably facepalm'd because of the above reasons. You put yourself at risk of creating copies of objects that contain OGL data, which makes them tedious to keep track of.

I'm running with a setup like this:
Code:
class Mesh{
    //like before, including the destructor that calls glDelete..
};
typedef std::shared_ptr<Mesh> SharedMesh;
 
class MeshManager{
 
public:
    static SharedMesh GetMesh(int index){
        return m_meshes[index];
    }
    static int StoreMesh( std::vector<Vertex> vertices ){
        SharedMesh m = std::make_shared<Mesh>(vertices);
        m_meshes.push_back(m);
        return m_meshes.size() - 1; //Return the index of m
    }
 
private:
    static std::vector<SharedMesh> m_meshes;
};
 
void someFunction(){
    int meshId = MeshManager::StoreMesh( someVertexData );
 
    //If I want to access my mesh again I can do so by using meshId
    MeshManager::GetMesh(meshId)->Render(); //Manually render this mesh
}

One cool bonus:
Because I still have the Mesh's destructor it'll get rid of the OGL objects for me once the static m_meshes vector is destroyed (which happens when the program exits).
So I don't need any additional cleanup functions.

Also, what's with the facepalm for using std::vector<Meshes> as a list? Is it supposed to be a pointer to the meshes rather than being on the stack?
Actually a std::vector puts its contents onto the heap as well, so the stack doesn't have much to do with it - which is good, because the stack is limited in size and wouldn't be able to hold the megabytes of data you could stuff into a std::vector.


Nooooooow,
that being said: I don't think it has much to do with your initial problem.

Even if you had problems with copies, that would only destroy your OpenGL buffer object, not remove the contents of your vertex vector, or the value of VBO.
So, you HAVE to have a mess-up that is entirely C++-related somewhere.

One thing you can try is to change Mesh's default constructor:
Code:
//Old one:
Mesh::Mesh () {}
//new one
Mesh::Mesh(){
    VBO = 1337;
}
If the console output now displays 1337 instead of 0, you can be sure that you lost your original Mesh object somewhere along the way (which is what I'd see as the most likely cause).

If that doesn't help, maybe you could post more of your code? The error doesn't seem to be in the parts you've posted.
 

D.V.D

Make a wish
Wow, that's a lot to take in!!! As it looks now, it does seem like the variable gets deleted because the VBOs are the same. However, I'm using Visual Studio 2010 and my compiler doesn't seem to support C++11. I'd like to learn it too, but I'm stuck with a really old Learn C++ in 21 Days book from 1997.

I made sure the RenderScene function gets called, and it does; my debug message appears, so the fact that I changed the layout of the GLUT interface hasn't broken anything.

Now, I haven't ever actually used pointers without closely following another tutorial, so I might not understand a lot of the basics, but this is the solution I had in mind. A few changes I've made from the original: all my GLUT functions are members of a GLEngine class, which pretty much acts as the GLUT interface. In this class I have a std::vector of all the meshes in my application. I got rid of the CreateMesh function and have an AddMesh function which simply adds a mesh to the application. It looks something like this:

Code:
class GLEngine {
public:
 
    virtual void RenderScene ();
    // .... Glut Functions
 
    void AddMesh (Mesh newmesh) {
        int pos = ModelList.size();
        ModelList.resize(pos+1);
        ModelList[pos] = newmesh;
    }
 
private:
 
    std::vector<Mesh> ModelList;
 
};
 
int main (int argc, char** argv) {
    // Glut Glew stuff
 
    std::vector<Vertex> Vertices (1, Vector3f() );
    Vertices[0].pos = Vector3f (0.0f, 0.0f, 0.0f);
    Mesh Model = Mesh(Vertices);
    App->AddMesh(Model);
 
    // run glut main loop whatnot
 
}

The problem I'm seeing here is that the Model variable gets copied in the AddMesh function. So I tried to change it by passing the mesh by reference; that way I don't recreate the variable inside AddMesh. I get different values inside the render function now, and I do get a dot rendered, but if I have 3 vertices with GL_TRIANGLES as the render mode, I get no image. Here's what I changed in the code:

Code:
class GLEngine {
public:
 
    void AddMesh (Mesh &newmesh) {
        int pos = ModelList.size();
        ModelList.resize(pos+1);
        ModelList[pos] = newmesh;
    }
 
private:
 
    std::vector<Mesh> ModelList;
 
};
 
int main (int argc, char** argv) {
    // Glut Glew stuff
 
    std::vector<Vertex> Vertices (1, Vector3f() );
    Vertices[0].pos = Vector3f (0.0f, 0.0f, 0.0f);
    Mesh Model = Mesh(Vertices);
    Mesh &rModel = Model;
    App->AddMesh(rModel);
 
    // run glut main loop whatnot
 
}

And my render function looks like this:

Code:
void Mesh::Render () {
    std::cout << vertices.size() << std::endl; // returns 1, previously 0
    //std::cout << VBO << std::endl; // return 1, previously 0
    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
    glDrawArrays(GL_POINTS, 0, vertices.size());
    glDisableVertexAttribArray(0);
}

This shows that the VBO does exist (it holds buffer id 1), and there is 1 vertex stored in the Mesh's vertices vector. I looked at the order of my GL calls and it seems right according to this tutorial: http://ogldev.atspace.co.uk/www/tutorial02/tutorial02.html . A dot is rendered when I draw my single vertex, but no triangle is rendered when I have 3 vertices. I looked through some of my GL calls; nothing was showing up at first, partially because depth testing and face culling were enabled, and disabling both got the dot on the screen. I don't have other options enabled except GLUT_DOUBLE and GLUT_RGBA. Is there a commonly used option that might interfere with my drawn triangle?
 

s3rius

Linux is only free if your time is worthless.
1997.. that means it isn't even C++03. Not a good source to learn from. It's like trying to learn English by reading books from the Middle Ages.

As long as you don't have a destructor that calls glDeleteBuffers() you don't have to worry about making copies of Meshes, because you don't run into trouble by prematurely deleting your VBO.

So the first piece of code (without references) is fine.

(The second one actually still creates copies, even though it would probably work with a destructor just because you happen to keep both copies for a long time - and you're using references wrong anyway :D )

The render code looks fine.
I'm actually surprised that you had to turn off face culling to get it working. But it's useful to keep it disabled once you start rendering triangles by hand.
And even depth testing should be fine as long as you call glClear() with GL_DEPTH_BUFFER_BIT.
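
For reference, clearing both buffers in one call looks like this:

Code:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);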

What 3 vertices do you put into your vector when you try to render a triangle?
Try to "rotate" the triangle by changing the points. Maybe you've placed the triangle in a way that you don't actually see it's front but it's side - and a triangle's side is infinitely flat so you don't so anything.

Remember that, by default, the Z-axis is the one going up, the Y-axis is the one going into the distance, the X-axis is the one going from left-to-right.
So if you don't have some different Z and X-coordinates in your vertices then you'll not see anything.

By the way. Instead of doing this:
Code:
int pos = ModelList.size();
ModelList.resize(pos+1);
ModelList[pos] = newmesh;

You can do:
Code:
ModelList.push_back( newmesh );
It'll automatically increase the size of ModelList if necessary, and will just place the newmesh at the end.
 

D.V.D

Make a wish
My vertices are as follows:

Code:
    std::vector<Vertex> Vertices (3, Vector3f());
    Vertices[0].pos = Vector3f(-1.0f, -1.0f, 0.0f);
    Vertices[1].pos = Vector3f(1.0f, -1.0f, 0.0f);
    Vertices[2].pos = Vector3f(0.0f, 1.0f, 0.0f);

Well, it relies on the original still being around, but it doesn't make copies via argument passing, does it? :p How would I use references in such a case? Is it wrong because of how I set up the reference in the main function, or in AddMesh?

Wait, I thought the Z axis was depth o_O. Even so, I'm following the coordinates given by the previously linked tutorial, so I'd assume it should still work. I don't set any kind of projection, so it should be using whatever the default is.

I plotted the triangle by hand; it should still take up a good portion of the screen - enough to be extremely visible, at least.
 

s3rius

Linux is only free if your time is worthless.
My bad, I messed up Z and Y :oops:. How embarrassing.

Just to make sure, you're using GL_TRIANGLES, not GL_TRIANGLE, aye? I've made this mistake once and it took me an hour to find.

Regarding references:
The use of references in AddMesh is fine.

Code:
Mesh Model = Mesh(vertices);
Mesh &rModel = Model;
App->AddMesh(rModel);
That part is unnecessary.

Code:
Mesh Model = Mesh(vertices);
App->AddMesh(Model);
That's entirely enough.

You can always pass a normal variable to a function that expects to be given a reference. It's just the language's way of saying "don't make a copy of this one". You don't have to create a reference manually beforehand.

However, putting a variable into a vector will create a copy, regardless of whether you pass in a reference or not. This copy will be stored inside the vector.
So at this point you have 2 versions of the same variable sticking around.
During run-time you won't encounter a problem, because neither variable will be destroyed.

But when your program shuts down it'll call the destructor for both copies. You'd basically glDelete every buffer twice - which isn't a problem per se, but it's a dirty solution.
Again - this is only a problem if you glDelete inside your destructor in the first place.

And this is part of why I belong to the camp of people who will tell you to avoid references and pointers unless needed.
If you have to rely on enforcing clever pass-by-reference rules for some of your objects then you'll have a fun time maintaining the code once you reach a couple of thousand lines.

(So references [or pointers] aren't that evil by themselves, but once they become mandatory for your code to work you should be careful.)
 

D.V.D

Make a wish
I tried both; GL_TRIANGLE gives me an undefined-identifier error, so I always use GL_TRIANGLES, which doesn't show anything :( Really weird, since GL_POINTS works, I think. It always creates the point in the center, hinting at maybe a distance issue? Is there a way to find out my default view frustum, or whatever OpenGL uses by default? I put in these vectors and each one gives me a point in the center of the screen (this seems to be the problem causing all the non-drawn triangles).

Code:
Vertices[0].pos = Vector3f(0.4f, 0.0f, 0.0f);
Vertices[0].pos = Vector3f(0.8f, 0.0f, 0.0f);
Vertices[0].pos = Vector3f(0.8f, 0.2f, 0.0f);

Alright, I think I understand what's happening a bit better now. So if I don't have glDeleteBuffers in my destructor, I won't have a leaking problem with my buffer on the GPU? Will it be destroyed, or was it never sent in the first place?

Yeah, this pointer/reference business is causing a headache, but it's a pretty cool concept. I'm thinking of just having a separate function that clears the buffers with glDeleteBuffers, but I'm not sure whether the data stays on the GPU, which would leave a leaked, inaccessible buffer there.
 

s3rius

Linux is only free if your time is worthless.
In a nutshell:
You have two copies of the same Mesh, both hold (and use) the same Buffer-handle.
Their destructor is telling them to delete the buffer that the handle is pointing to.

Once one of your two copies is destroyed it'll then delete the buffer, without knowing that there is another Mesh out there that is still using this buffer.
So the other mesh now references a buffer that doesn't exist anymore.
That's not a problem per se, because OpenGL will just ignore your commands if you give it a bad buffer. But of course it's not what you want.
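
(If you want to detect that case while debugging, glIsBuffer() tells you whether a handle still names a live buffer - a quick sketch:)

Code:
//Prints a warning once the buffer has been deleted (or was never bound).
if (!glIsBuffer(VBO)) {
    std::cout << "VBO handle " << VBO << " no longer names a valid buffer!" << std::endl;
}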

So the easiest solution is to simply not use a destructor that calls glDeleteX. This way your meshes won't delete any data that might still be needed.
(Good programming style suggests that you make sure to call glDeleteX for every buffer at some point, generally at the very end of the program execution, in some sort of cleanup() function. But theoretically you don't have to, because OpenGL automatically cleans up after you.)
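
A minimal sketch of that explicit-cleanup idea (the FreeBuffer name is made up; you'd call it once, when you know nothing uses the buffer anymore):

Code:
struct Mesh {
    //... as before, but with no glDeleteBuffers in the destructor ...

    void FreeBuffer () {
        if (VBO != 0) {
            glDeleteBuffers(1, &VBO);
            VBO = 0; //mark as freed so calling this twice is harmless
        }
    }
};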

Regarding the triangle:
When you use GL_POINTS, does the point always appear in the middle of the screen no matter what values you have put into the Vector3f?
And are you using shaders?
 

D.V.D

Make a wish
Okay, well, I'll have specific functions that load the buffer onto the GPU (bind the buffer) and delete it, because not every mesh that I create should be loaded onto the GPU. Some might simply be loaded because a certain unit has that mesh, but if the unit is occluded, then the mesh has no business being there.

No, I'm not using shaders, and I think the problem stems from me borrowing code from the linked tutorial; in that code they have all the conversions from world space to camera space and whatnot. I think that because of this the triangle doesn't get drawn, because it's smaller than a pixel, whereas the points do get drawn, because a point is always one pixel regardless of its position or how close it is to the camera. How to solve this I'm not sure :p Is there a way of setting the bounding volume that gets drawn to the screen?

Here's the GLUT code that gets called:

Code:
void GLUTBackendInit (int argc, char** argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE|GLUT_RGBA);
}
 
void GLUTBackendRun (Callbacks* callbacks) {
    if (!callbacks) {
        fprintf(stderr, "%s : callbacks not specified!\n", __FUNCTION__);
        return;
    }
 
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    //glFrontFace(GL_CW);
    //glCullFace(GL_BACK);
    //glEnable(GL_CULL_FACE);
    //glEnable(GL_DEPTH_TEST);
    pCallbacks = callbacks;
    InitCallbacks();
    glutMainLoop();
}
 
bool GLUTBackendCreateWindow(unsigned int Width, unsigned int Height, unsigned int bpp, bool isFullScreen, const char* pTitle)
{
    if (isFullScreen) {
        char ModeString[64] = { 0 };
        snprintf(ModeString, sizeof(ModeString), "%dx%d@%d", Width, Height, bpp);
        glutGameModeString(ModeString);
        glutEnterGameMode();
    }
    else {
        glutInitWindowSize(Width, Height);
        glutCreateWindow(pTitle);
    }
 
    // Must be done after glut is initialized!
    GLenum res = glewInit();
    if (res != GLEW_OK) {
        fprintf(stderr, "Error: '%s'\n", glewGetErrorString(res));
        return false;
    }
 
    return true;
}
 
int main (int argc, char** argv) {
    GLUTBackendInit(argc, argv);
 
    if ( !GLUTBackendCreateWindow(WINDOW_X, WINDOW_Y, 32, WINDOW_IS_FULLSCREEN, WINDOW_TITLE) ) {
        return 1;
    }
 
    GLEngine* App = new GLEngine();
 
    if ( !App->Init() ) {
        return 1;
    }
 
    std::vector<Vertex> Vertices (1, Vector3f());
    //Vertices[0].pos = Vector3f(-1.0f, -1.0f, 0.0f);
    //Vertices[1].pos = Vector3f(1.0f, -1.0f, 0.0f);
    //Vertices[2].pos = Vector3f(0.0f, 1.0f, 0.0f);
    Vertices[0].pos = Vector3f(0.8f, 0.2f, 0.0f);
    Mesh Model = Mesh(Vertices);
 
    App->World.CreateUnit(Model);
 
    //cout << App->ModelList[0].getvbo() << endl; // 1
    //cout << App->ModelList[0].vertices.size() << endl; // returns proper number of vertices
    App->Run();
    delete App;
 
    return 0;
}

Note: the commented-out vertices make the triangle I was trying to draw. The commented-out cout calls simply tested whether the VBO had the expected value.

Callbacks is simply a class with virtual methods for each of the special GLUT callbacks (Keyboard, Mouse, Render, Idle, etc.). They aren't implemented in that class, but are overridden in the GLEngine class, which on Init calls GLUTBackendRun.

Tbh, I don't see anything different from the default code for a GLUT program, since glutEnterGameMode is never called because I have fullscreen set to false.
 

s3rius

Linux is only free if your time is worthless.
I think by default the screen goes from -1 to 1 in height and width.

So *if* that's the case (GLUT could change it, some part of your code could change it, OpenGL could be different than I remember) that might be a reason.

But can you try displaying a point with GL_POINTS and moving it around? Does it always appear at the same spot (= middle of the screen), or does it change position?

My current guess is that you're running a modern OGL context (3.0+) which would mean that you kinda need a pair of shaders to display correctly. Without the shaders everything would be placed in the (0,0) position. And a triangle with all 3 points being (0,0) wouldn't show up, of course.

If you can move the point around, then that'd refute my guess.
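
In case it is the missing shaders, a minimal pass-through pair looks roughly like this (just a sketch; #version 130 matches a 3.0 context, and the glCreateShader/glCompileShader/glAttachShader/glLinkProgram boilerplate is omitted):

Code:
//Vertex shader: forward the position unchanged.
const char* vs =
    "#version 130\n"
    "in vec3 Position;\n"
    "void main() { gl_Position = vec4(Position, 1.0); }\n";

//Fragment shader: paint every fragment white.
const char* fs =
    "#version 130\n"
    "out vec4 FragColor;\n"
    "void main() { FragColor = vec4(1.0, 1.0, 1.0, 1.0); }\n";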
 

D.V.D

Make a wish
Yeah, I'm running 3.0 and I don't have any shaders. I'll implement that, and change the constructor/destructor to not touch the GL buffers, with separate functions doing that instead so it's more controlled. Sorry for the late reply; school came back and I had to pick up my math marks, so I haven't had the time to implement it yet :( I'll update here if anything changes. Also, just a question about the memory layout: is the stack (with locals and whatnot) so small because it's the L# cache of a CPU? And then the heap would be the main RAM?
 

s3rius

Linux is only free if your time is worthless.
http://static.duartes.org/img/blogPosts/linuxFlexibleAddressSpaceLayout.png

If you imagine RAM as actual space it'd look a bit like this.
The bottom is all static data, which has a certain size.
The program has a certain amount of free memory for the stack. After that free space is used up it'd bump against the Memory Mapping Segment. The stack has to be one solid chunk (it can't be broken up into pieces).
So that's where the stack limit comes from. The actual amount (how many MB) is completely arbitrary.

The heap isn't limited by the one-chunk restriction. So if the heap grows too large, the operating system is allowed to grab more free space from anywhere else. That's indicated by the "program break" notation on the right side.

However, heap and stack reside in RAM. They're just different regions. Both can be loaded into the L caches of the CPU, if necessary.
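
You can see the difference in practice with a tiny sketch (the exact stack limit is platform-dependent; MSVC defaults to about 1 MB per thread):

Code:
#include <vector>

int main () {
    //~8 MB as a local array would overflow the default stack:
    //char onStack[8 * 1024 * 1024]; //likely crashes if uncommented

    //the same 8 MB on the heap is no problem:
    std::vector<char> onHeap(8 * 1024 * 1024);
    return 0;
}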
 

D.V.D

Make a wish
Oh okay, that explains the memory constraint. So out of curiosity, if I have, for example, 3 RAM sticks, and 2 are fast while the last one is slower, will it actually cause slowdowns rather than speedups to have the slower stick connected to my system? If by chance the stack gets put on the slower RAM stick, wouldn't that cause a major issue and remove most of the advantage of having the faster RAM?

Also, how do you judge what needs to be on the different levels of cache? Is it data that gets used over and over - for example, vertex processing? I noticed the DOOM 3 source code has a vertex cache file, but I don't know much about cache usage other than it being extremely fast. Also, a cool fact: StarCraft 2 will only use 2 cores, but it will use all available cache on the system, so I'm assuming that would be the cache that isn't core-specific.

Alright, so after moving the GL buffer calls into separate functions rather than the constructor/destructor, my code works!! I don't require any shaders; the triangle comes up white, so it does show that my previous code wasn't working because of buffer/pointer mismanagement.

What I'm wondering, however, is how actual applications load in their models. People were surprised when I made one mesh equal another, but wouldn't you technically have to do that when you load models? For example:

Code:
struct Mesh {
    ...
};
 
std::vector<Mesh> AllMeshes;
 
void AddMesh (string filename) {
    Mesh newMesh;
    newMesh.LoadMesh(filename);
    AllMeshes.push_back(newMesh);
}

Wouldn't you always have to do something similar when loading models into an engine? Would that require a custom operator=, or could you just use the default one?
 

s3rius

Linux is only free if your time is worthless.
Regarding RAM:
First of all, using RAM with different timings probably isn't such a good idea. It's very unusual, and God knows what could happen.
But usually the faster RAM will just be slowed down to match the slower one. The hardware tries to abstract away the fact that you have different kinds of RAM.

Regarding cache:
Usually every CPU has its own cache. A CPU can only work on the data inside its cache. Your program and all its data, on the other hand, are stored in RAM.

During program execution the memory parts that your CPU needs will be copied into the cache; then the CPU can work with them and write the results back into the cache; afterwards the cache contents will be copied back into RAM.

Cache is basically a very small, but very fast RAM.
It's been added because the normal RAM is simply not fast enough to work well, but you can't make your entire main memory out of cache-RAM because it's way too expensive, eats way too much power and creates way too much heat for a normal computer.

StarCraft II can only use the cache of the CPUs it's using. So just because you have 4 cores doesn't mean SCII can "steal away" the remaining 2 cores' cache.

What they probably meant was:
1) They make efficient use of the cache that's available to them. While programmers have no direct control over the cache, you can often write code in a way that's more cache-friendly to get some performance boost (see the sketch after this list).

or 2) They use other caching systems. The hardware CPU cache isn't the only thing that's called "cache".
"Caching" usually means putting data inside fast storage to improve access times.
You cache data by loading parts of RAM into the L# caches (as explained above).
Or you cache data by loading parts of your hard drive into the RAM (for example textures or models, probably what you were reading about Doom3).
Or you cache data of websites into your RAM or hard drive so you don't have to download them again (your browser has this kind of cache).
All these techniques (and more) can legitimately be called caching.
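
As an illustration of cache-friendly code (a made-up example, nothing from your engine): the same 2D sum, traversed two ways. Row-by-row walks memory in order, so the cache can prefetch it; column-by-column jumps a whole row ahead on every step and keeps stalling.

Code:
const int N = 1024;
static float grid[N][N];

float SumRowMajor () { //cache-friendly: memory visited in order
    float sum = 0.0f;
    for (int y = 0; y < N; ++y)
        for (int x = 0; x < N; ++x)
            sum += grid[y][x];
    return sum;
}

float SumColumnMajor () { //cache-hostile: jumps 4 KB every step
    float sum = 0.0f;
    for (int x = 0; x < N; ++x)
        for (int y = 0; y < N; ++y)
            sum += grid[y][x];
    return sum;
}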

The L# caches have slightly different uses:
L1 cache is what I explained above. Very small, but extremely fast. The only cache that the CPU can access.
L2 cache is a slightly slower, but larger cache, it's used as a kind of buffer between RAM and L1.
L3 cache is mainly used to ensure coherency between cores. If 2 CPUs want the same data then you have a problem, because both will want to copy the data into their own L1 cache, so neither knows what the other core is doing to the data. L3 is helping in that regard, but it's a pretty complicated process.

Regarding OpenGL:

The reason why people were surprised when you used = is:

1) If your mesh manages the lifetime of its OGL buffers (destructors with glDelete and stuff like that) then you run into a lot of trouble, as explained in all the posts above :)
2) You really only care about the VBO variable from your mesh (the only thing you need to call glDrawX). After all, you don't need the vector<Vertex> anymore, because all the vertex data has been copied into GPU memory at this point (you might need it again once you're doing stuff like accurate collision detection). So you're copying around useless data.
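
One way to act on point 2 (just a sketch; vertexCount would be a new member replacing the stored vector):

Code:
Mesh::Mesh (const std::vector<Vertex>& _vertices) {
    vertexCount = _vertices.size(); //glDrawArrays only needs the count
    glGenBuffers(1, &VBO);
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex) * vertexCount,
                 &_vertices[0], GL_STATIC_DRAW);
    //no member copy of the vertices - the GPU buffer holds the data now
}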

A "common" way would be to use pointers:
Code:
class ModelInstance {
    Mesh* myMesh;
    Vector3f modelPosition;
    Animation currentAnimation;
    ....
 
public:
    ModelInstance (Mesh* meshPointer){
        myMesh = meshPointer;
    }
 
    void Draw(){
        myMesh->Render(modelPosition, currentAnimation); //use pointer to access mesh
    }
};
 
vector<Mesh> allMyMeshes; //all your meshes :)
 
int main(){
    ModelInstance model( &allMyMeshes[4] ); //get a pointer to allMyMeshes[4]
 
    model.Draw(); //Draw the model
}
You separately load all your Meshes into a vector, then get a pointer to a certain Mesh when you create a ModelInstance.
Upon drawing, you then just pass the Mesh all the information it needs to draw itself at the specified position.
But raw pointers are a pain to deal with, and they can lead to very strange errors.
For example, when you add more Meshes to your vector, it could happen that the internal storage becomes too small to hold them all. The vector automatically resizes itself. But that might cause all Meshes inside the vector to move to a different location in memory.
You don't notice that change, but now all your Mesh pointers point to a wrong memory location.. not good..

But you can simply change to an array index:

Code:
vector<Mesh> allMyMeshes; //all your meshes :)
 
class ModelInstance {
    int myMeshIndex;
    Vector3f modelPosition;
    Animation currentAnimation;
    ....
 
public:
    ModelInstance (int meshIndex){
        myMeshIndex = meshIndex;
    }
 
    void Draw(){
        allMyMeshes[myMeshIndex].Render(modelPosition, currentAnimation);
    }
};
 
int main(){
    ModelInstance model( 4 ); //want to use the 5th model stored in our vector
 
    model.Draw(); //Draw the model
}

You can see another solution in post #4 in this thread, using std::shared_ptr and a ModelManager (most applications use something like a model manager to load/delete/access stuff).

For more convenience you can extend the use of your ModelManager.
In my application I can use models like this:

Code:
class Actor{
    MeshHandle mesh; //<- this is basically the array index
    Vector3f position;
 
public:
    void Load(std::string name){
        //GetMeshHandle() returns the array index of the model with the given name.
        //if it couldn't be found (=> not loaded yet) it'll try to find it on the hard drive and load it.
        mesh = ResourceManager::GetMeshHandle(name);
 
        //RenderingQueue is a static class which calls the Draw() function of all added Actors
        //every frame. This way I don't have to manually write x.Draw() all the time.
        RenderingQueue::Add(this);
    }
 
    //This function will only be called by the RenderingQueue.
    void Draw(){
        ResourceManager::GetMesh(mesh).Draw(position);
    }
 
    ~Actor(){
        //Stop rendering this Actor.
        RenderingQueue::Remove(this);
    }
};
 
class Unit{
    Actor person;
    Actor healthBar;
    Actor gun;
 
    //No need for a Draw() function, because all Actors do it via the RenderingQueue.
 
public:
    Unit(){
        //On construction I load all the models I want
        person.Load("data/models/someModel.mdl"); //every Actor handles loading by itself
        healthBar.Load("data/models/someBar.mdl");
        ....
        //and when the Unit object is destroyed it'll get cleaned up automagically!
    }
};
 
Unit playerUnit; //As long as this object exists, it'll be drawn to the screen.
 
void someFunction(){
    playerUnit.moveTo(some_location);
}

Loading and managing resources like models can become very complicated:

-> What happens if you can't find the model that you were asked to load?
-> What happens when you run out of RAM and have to get rid of some models?
-> Large models might be very time-intensive to load; if you load them on the fly, you have to prevent lag.
-> Modern games often have models at different detail grades. A model's detail grade goes down when it is far away from the camera, to save processing power.
-> etc etc
 

D.V.D

Make a wish
Ouch, I'll remove the extra RAM stick then. I thought the extra 1 GB of RAM would help, but it seems like it's actually slowing things down D:

Hmm, so no matter what, every piece of code at some point finds its way into the L1 cache to get processed and then gets replaced by the next piece of code?

Oh I see. Well, from what I remember and what you're telling me, it probably used all of the L3 cache but not the other levels, like L1, since that one is core-specific.

Yeah, I've heard a bit about the cache-friendliness and cache-miss stuff. It's one of the things I wanna check out after I get better at programming and learn the basics of graphics engines.

Oh I see, that can cause a lot of confusion :p

Yeah, it seems like a lot of synchronization is required between the caches, and timings need to be as tight as possible so you don't have stalls.

Hmm, with the model manager, can't you have it keep a list of all meshes loaded into RAM and then check the amount of space you have left? Then, if you're loading in a new model, the model file has a variable that tells you its size, and you check whether there is enough space to load it; if not, then based on importance you delete meshes or don't load the new one. Am I correct? The only problem is knowing how much space is left in your RAM. Is there a Windows library or some way I can find such values? It seems like it would be an OS library.

EDIT: http://msdn.microsoft.com/en-us/library/aa366589(v=vs.85).aspx

It seems to be there, so I found it. Now I just gotta sort out the uses of virtual, paging, and physical memory, or whether I can group all of it into a single block of memory and say that's how much is available for the models.
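
For reference, the call from that MSDN page looks roughly like this (a sketch):

Code:
#include <windows.h>
#include <cstdio>

int main () {
    MEMORYSTATUSEX status;
    status.dwLength = sizeof(status); //must be filled in before the call
    if (GlobalMemoryStatusEx(&status)) {
        printf("Available physical memory: %llu MB\n",
               status.ullAvailPhys / (1024 * 1024));
        printf("Memory load: %lu%%\n", status.dwMemoryLoad);
    }
    return 0;
}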

Your last example seemed more like something found in an actual game engine :p If I understand it correctly, you have a class which handles the list of models you have loaded into RAM, and then you have a class that renders them every frame. Wouldn't that be similar to my method of having the GLEngine class handle the rendering of all my objects, even if it's at the most basic possible level, without the resource manager?

For your second point, don't games do model/texture streaming, where they stream in only the data they need from the hard drive into RAM and then update the data in RAM as more of the model becomes visible? I always thought this was a cool strategy, but very latency-heavy; however, I'm not sure how much that even matters when you stream stuff from the hard drive to RAM.

Also, don't games load data before it's actually visible? I know some games suffer intense load times because they don't do this or do it poorly, like the new Lego City Undercover game for Wii U.

Different models for different detail levels seem like a bit of a cheesy hack around not having enough performance and RAM on a system :p If I make a game, I'd rather keep it simpler than add very visible model swapping. It takes away from the experience, in my personal opinion :p

I couldn't find a good answer to this question, and my dad said no, but I recall that a really long time ago there was a game that accidentally deleted everything on your hard drive. Can that happen with pointers, or can pointers only potentially trash memory in RAM?

EDIT: If you read about my LNK2019 error: I fixed it. The error was that I defined the inline operator= in the cpp file instead of the header.

Btw, thanks for being so helpful and giving easy-to-understand replies to every one of my questions!!
 

s3rius

Linux is only free if your time is worthless.
Every bit of code and data which is used will eventually end up in the cache for a while. There is actually a separate cache for code (the instruction cache), but it still works the same.

Regarding memory size management:
You can't just check how much memory is left available. Your program could be memory-restricted, heap fragmentation could make a new allocation impossible, your operating system might not like you, etc.
So even if you have 100 MB of memory left, there's no guarantee that your operating system will allow your program to allocate another 50 MB.
And the graphics card uses a separate memory (the VRAM) altogether.

But yes, once your memory is full you've got to start deleting resources based on some priority. Usually you'll try to delete models that won't be needed in the near future (= the next few seconds / minutes).
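
A bare-bones version of that priority idea (a sketch with made-up names; lastUsedFrame would be a member you update every time the mesh is drawn):

Code:
//Evict the mesh that hasn't been drawn for the longest time.
void EvictOneMesh (std::vector<Mesh>& meshes) {
    if (meshes.empty()) return;
    size_t oldest = 0;
    for (size_t i = 1; i < meshes.size(); ++i)
        if (meshes[i].lastUsedFrame < meshes[oldest].lastUsedFrame)
            oldest = i;
    meshes[oldest].FreeBuffer(); //free the GPU buffer (see the earlier sketch)
    meshes.erase(meshes.begin() + oldest);
}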

In short:
Physical memory is your actual hardware memory amount.
Paging file is a file that the computer may use to store parts of your RAM on the hard disk.
Virtual memory is "simulated" memory that is unique to every process (if I remember correctly). Virtual memory is what programs work with, but as you add data to virtual memory, physical memory fills up too.

Regaring resource management:
Yes, your approach is good with the exception of the glDeleteX stuff we've been talking about.

There are a lot of games that do data streaming, for example all the open-world titles (Crysis, Far Cry, the new Tomb Raider) but not all do. StarCraft II doesn't do dedicated data streaming, for example. At least I'd be very surprised if it did.

And some games also load data before it's visible. Sometimes that isn't all that possible, however. Take Warcraft III for example. The first time you spawn a unit via Triggers, the game will lag. In SCII that doesn't happen, so they've found a way around that.

And yes, the low-detail models are a hack around not having enough performance. But you simply couldn't play a game like Far Cry 3 without it. Computers don't have enough power to render everything in full detail. That works in closed rooms, but not in open spaces.

But for a small and simple game you don't need things like this, of course.
You don't need to be careful with memory either. Who cares if your program eats 300 MB. You have tons left anyway.

Regarding deleting your hard drive:
On Windows? You can't even delete your entire RAM with pointers even if you wanted to. You'd get an access violation and your program simply crashes. Hard drive? No chance.

But of course it's easy to write a program that intentionally deletes all (or most) of your hard drive.
 