Remove/increase the record size limit #7332
Conversation
};

-const ULONG MAX_RECORD_SIZE = 65535;
+const ULONG MAX_RECORD_SIZE = 1000000; // just to protect from misuse
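For context, a limit like this is typically enforced when a table's record format is computed. A minimal sketch of such a check (hypothetical, not Firebird's actual code; `checkRecordSize` and the exception type are assumptions for illustration):

```cpp
#include <stdexcept>
#include <string>

typedef unsigned long ULONG;

// New limit from this PR; the old value was 65535.
const ULONG MAX_RECORD_SIZE = 1000000; // just to protect from misuse

// Hypothetical helper: reject a computed record length exceeding the limit.
inline void checkRecordSize(ULONG recordLength)
{
    if (recordLength > MAX_RECORD_SIZE)
    {
        throw std::length_error("record size " + std::to_string(recordLength) +
                                " exceeds MAX_RECORD_SIZE");
    }
}
```

With this sketch, a record of 65536 bytes (over the old limit) would now be accepted, while anything above 1000000 bytes would still be rejected.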
Wouldn't 1048576 (1 MB) be easier to document/explain?
Agreed. But my primary worry is whether we can foresee any other problems with this change. Increased tempspace usage is bad, but that's just a performance issue (those using very long records should keep it in mind). Longer records will also increase memory usage. For very complex queries (those near the 255-context limit), if we imagine that e.g. every second stream has its rpb_record, then the worst-case memory usage per query increases from 8 MB to 128 MB. With many compiled statements being cached, this may become a problem, although in practice we shouldn't expect all tables to be that wide. Or we should release the rpb's records of cached requests when their use count drops to zero. Any other issues you can think of?
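The back-of-envelope numbers above can be reproduced directly (illustrative only; the 255-context limit and the "every second stream holds a maximum-size rpb_record" scenario are taken from the comment, and `worstCaseRecordMemory` is a made-up name):

```cpp
typedef unsigned long long UINT64;

const UINT64 OLD_MAX_RECORD_SIZE = 65535;    // old limit (~64 KB)
const UINT64 NEW_MAX_RECORD_SIZE = 1000000;  // new limit (~1 MB)
const UINT64 MAX_CONTEXTS = 255;             // per-statement context limit

// Worst-case per-query record memory, assuming every second stream
// keeps its own rpb_record of maximum size (127 records total).
inline UINT64 worstCaseRecordMemory(UINT64 recordSize)
{
    return (MAX_CONTEXTS / 2) * recordSize;
}
```

That gives roughly 8 MB with the old limit versus roughly 127 MB with the new one, a ~15x increase per cached statement, which is what motivates releasing the records of idle cached requests.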
The memory usage issue concerns not only user statements but also procedures/functions/triggers, which are cached as well. Maybe EXE_unwind() should delete all rpb_record's after closing the rsb's and releasing the local tables? Or should it be done by RecordStream::invalidateRecords()?
As Firebird is used in LibreOffice, I suppose that 10 MB would be a more rational change for them.
BTW, is there the same sanity check for the result set record size, or is it completely unlimited?
Unlimited.
The patch passed all the CI tests successfully (except the explicit checks for the max record size). The sorting module switches to "refetch" mode while processing long records, so memory consumption remains low. Hash joins are slightly affected from the memory consumption POV, but the effect is limited to the right side of the join, which usually has low cardinality. Merge joins may be affected more, but this just means switching to temp files earlier; the maximum memory usage is still restricted by the … I still suppose that it makes sense to release …
@dyemanov To clarify, with this change, a table can have rows of 1 MiB, and a result set has no limit (was that already the case in earlier versions for result sets, or is that also new in Firebird 6?) |
Correct. Unlimited result sets have been available since 3.0, IIRC.
This addresses ticket #1130. After the compression improvements, the storage overhead is no longer an issue. I think we should still preserve some safety limit, e.g. 1 MB. This change suggests some other improvements too, like compression of the stored temporary records (sorts, record buffers), but they may be addressed separately.