When we use the cRecordset class to create a recordset from SQLite, how efficiently does it manage memory?
Does it load the entire result set into the recordset object when OpenRecordset is executed?
Or does it load only part of it and then, as navigation methods are executed (MoveNext, MoveLast, MovePrev, MoveFirst, etc.), load more records into memory and dispose of older ones?
Let's say we have the following snippet of code:
Code:
Dim C As Currency
Dim Rs As cRecordset
Dim sql As String

sql = "select * from T1"
Set Rs = Cnn.OpenRecordset(sql)   ' Cnn is an already-open connection object

Do Until Rs.EOF
    C = C + Rs.Fields("A")        ' accumulate column A over every row
    Rs.MoveNext
Loop
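(I realize that for this particular total I could simply run "select sum(A) from T1" and let SQLite do the work; the loop is only meant as an example of walking through a large number of rows.)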
My opinion is that the right way to do it is this: when OpenRecordset is executed, load a certain number of records from the beginning of the table (and perhaps also from the end) into memory, so that for a reasonable number of navigations (MoveNext, MoveLast, MovePrev, MoveFirst, etc.) the records being navigated to are already in memory. If the program then navigates far enough to go past the records held in memory, new records should be loaded from the database and older ones (statistically the least-used records) discarded, with the memory allocated to them released back to the operating system. This kind of chunk-wise managing of records should continue until the recordset is set to Nothing, at which point all of the records should be disposed of and all of their memory released. Roughly the kind of logic I have in mind is sketched below.
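To make what I mean more concrete, here is a very rough VB6 sketch of that idea. Everything in it (the chunk size, the eviction policy, the FetchChunkFromDb helper) is made up by me purely for illustration; I have no idea whether cRecordset does anything like this internally.
Code:
Option Explicit

' Rough sketch of the chunk-wise caching idea above (my own invention -
' CHUNK_SIZE, MAX_CHUNKS and FetchChunkFromDb are placeholder names and
' have nothing to do with how cRecordset really works).
Private Const CHUNK_SIZE As Long = 1000   ' rows loaded per chunk
Private Const MAX_CHUNKS As Long = 4      ' chunks kept in memory at any time

Private mChunks As Collection             ' cached chunks, keyed by chunk number
Private mChunkOrder As Collection         ' chunk numbers in the order they were loaded

Private Function GetRowValue(ByVal RowIndex As Long) As Variant
    Dim ChunkNo As Long, Key As String, Chunk As Variant
    ChunkNo = RowIndex \ CHUNK_SIZE
    Key = CStr(ChunkNo)

    If mChunks Is Nothing Then
        Set mChunks = New Collection
        Set mChunkOrder = New Collection
    End If

    On Error Resume Next
    Chunk = mChunks(Key)                  ' try the in-memory cache first
    If Err.Number <> 0 Then               ' cache miss: load the chunk from the database
        Err.Clear
        Chunk = FetchChunkFromDb(ChunkNo * CHUNK_SIZE, CHUNK_SIZE)
        mChunks.Add Chunk, Key
        mChunkOrder.Add ChunkNo
        If mChunks.Count > MAX_CHUNKS Then    ' evict the oldest chunk to cap memory use
            mChunks.Remove CStr(mChunkOrder(1))
            mChunkOrder.Remove 1
        End If
    End If
    On Error GoTo 0

    GetRowValue = Chunk(RowIndex Mod CHUNK_SIZE)
End Function

Private Function FetchChunkFromDb(ByVal StartRow As Long, ByVal RowCount As Long) As Variant
    ' Placeholder: a real version would run something like
    ' "select * from T1 limit RowCount offset StartRow" and return those rows.
    Dim Rows() As Variant, i As Long
    ReDim Rows(0 To RowCount - 1)
    For i = 0 To RowCount - 1
        Rows(i) = StartRow + i            ' dummy values so the sketch is self-contained
    Next i
    FetchChunkFromDb = Rows
End Function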
Please correct me if I am wrong: the above is what I think would be the best way to manage records in memory, but I am not sure it really is the best, and I am not sure whether it is how SQLite recordset memory is actually managed.
The reason this matters is that if the table is gigantic, with millions of records, loading the entire table into memory is unwise because it could consume all of the computer's memory.
The larger the table gets, the more certain it becomes that available memory will eventually be exceeded.
What if the table has a billion records?
What if the table has 100 billion records?
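Even a rough back-of-the-envelope estimate (assuming, say, only 100 bytes per row, which is just my own guess) puts a billion rows at around 100 GB and 100 billion rows at around 10 TB, far beyond the RAM of any ordinary PC.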
I don't think it is right to load the entire table into memory, but I am not sure about that.
And I am not sure how SQLite actually does it.
Can you please clarify this issue?
Thanks.