Send small blobs inline. #8318

Draft · hvlad wants to merge 5 commits into master
Conversation

@hvlad (Member) commented Nov 14, 2024

This feature allows small blob contents to be sent in the same data stream as the main result set.
This lowers the number of round trips required to get blob data and significantly improves performance on high-latency networks.

The blob metadata and data are sent using a new packet type, op_inline_blob, and a new structure, P_INLINE_BLOB.
The op_inline_blob packet is sent before the corresponding op_sql_response (in the case of an answer to op_execute2 or op_exec_immediate2) or op_fetch_response (answer to op_fetch).
There can be as many op_inline_blob packets as there are blob fields in the output format.
NULL blobs and too-big blobs are not sent.
The blob is sent as a whole, i.e. the current implementation does not support sending part of a blob. The reasons are the wish to not over-complicate the code and the fact that seek is not implemented for segmented blobs.

The current, initial implementation sends all blobs whose total size is not greater than 16KB.
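
For illustration only, the new packet might be declared along these lines; the field names and types below are assumptions based on the description above, not the actual declaration from this PR:

    // Assumed shape only - see src/remote/protocol.h in the PR
    // for the real P_INLINE_BLOB declaration.
    typedef struct p_inline_blob
    {
        OBJCT   p_tran_id;    // transaction the cached blob will be bound to
        SQUAD   p_blob_id;    // blob id as it appears in the fetched row
        CSTRING p_blob_info;  // blob metadata (total length, number of segments, ...)
        CSTRING p_blob_data;  // the whole blob content (partial send is not supported)
    } P_INLINE_BLOB;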

The open question is what API changes are required to allow the user to customize this process:

  • allow enabling and disabling inline blob sending
  • allow setting the inline blob size limit
  • decide at what level the settings above should be applicable: per-attachment, per-statement, etc.
  • decide default and maximum values for the inline blob size limit.

Also, good to have but not required:

  • allow setting the BPB in advance
  • allow enabling blob inlining on a per-field basis, if the output format contains many blob fields.

This PR is currently in draft state and is published for early testers and commenters.

@hvlad self-assigned this Nov 14, 2024
@hvlad marked this pull request as draft November 14, 2024 11:58
@aafemt (Contributor) commented Nov 14, 2024

Why a new packet instead of sending them in the response message itself? IIRC response packets contain their own format, so inline BLOBs can be described individually as strings and then transformed into cached BLOBs on the client.

@AlexPeshkoff (Member):

Vlad, I suppose the content of op_inline_blob is cached by the remote provider in order to serve requests for data in those blobs w/o network access. If yes - how long is the data in that cache kept?

@hvlad (Member, Author) commented Nov 14, 2024

> Vlad, I suppose the content of op_inline_blob is cached by the remote provider in order to serve requests for data in those blobs w/o network access. If yes - how long is the data in that cache kept?

Yes, sure. A cached blob is bound to the transaction object and will be released by whatever happens first:

  • at transaction end, or
  • when the user opens the blob with a non-empty BPB, or
  • when the user opens the blob with an empty BPB and then closes it.

Note, in the case when the user opens the blob with a non-empty BPB, the cached blob is discarded.
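
For illustration, a sketch of how an application consumes such a cached blob through the OO API; assuming 'att', 'tra', 'status' and 'blobId' come from the surrounding fetch code, getSegment() is then served from the client-side cache without a network round trip:

    // Empty BPB: the inline (cached) blob content is used, no wire traffic.
    Firebird::IBlob* blob = att->openBlob(&status, tra, &blobId, 0, nullptr);

    char buf[16384];
    unsigned len = 0;
    while (blob->getSegment(&status, sizeof(buf), buf, &len) != Firebird::IStatus::RESULT_NO_DATA)
    {
        // ... process 'len' bytes ...
    }

    blob->close(&status); // open + close with empty BPB: the cached copy is released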

@AlexPeshkoff (Member):

Imagine a RO-RC transaction which lasts VERY long (nothing prevents keeping it open for the client application's lifetime). Would not such a long cache lifetime be an overhead?

@hvlad (Member, Author) commented Nov 14, 2024

> Imagine a RO-RC transaction which lasts VERY long (nothing prevents keeping it open for the client application's lifetime). Would not such a long cache lifetime be an overhead?

It is supposed that cached blobs will be read by the application.
Anyway, it would be good to have a way to set a limit on the blob cache size - is that your point?

@AlexPeshkoff (Member):

Truth be told, my first thought was that the cache is very tiny - just the blobs from the last fetched row - but this appears inefficient when we try to support various grids.

First of all, let's think about binding the cache not to the transaction but to the request/statement. It's hardly typical to close a statement and read blobs from it after the close. Moreover, in the worst case that will anyway work - in the old way, over the wire.

With limiting the cache size arrives one more tunable parameter, and I'm afraid there are already too many of them: blob size limit per-attachment or per-statement, maybe on a per-field basis (at least on/off), default BPB, maybe on a per-field basis too. (Hmm - are there many cases when >1 blob per row is returned?)

Last but not least - is blob inlining enabled by default? To my mind yes, but very reasonable (i.e. not too big) defaults should be used.

@sim1984 commented Nov 14, 2024

There should be cache size limits in any case. If you load 1000000 records (1 blob per record) at 16K each, that's already 16G. But if I understand correctly, this happens only when the user does not read these cached blobs as the records are fetched. Maybe it's worth limiting the blob cache to some count, for example 1000 (configurable), and when the number of blobs exceeds this value, the oldest of them are removed from the cache (a sketch of such an eviction scheme follows below).

And of course, it should be possible to disable/enable this at the statement level. And perhaps some DPB tag to set the default parameter.
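
A minimal illustration of the count-limited eviction suggested above, dropping the oldest cached blobs first. This is not code from the PR (which ends up limiting the cache by total bytes instead); the key type is simplified for the sketch:

    #include <cstdint>
    #include <list>
    #include <map>
    #include <vector>

    using BlobId = std::uint64_t;                 // blob quad packed into 64 bits, for illustration
    using BlobData = std::vector<unsigned char>;

    class BlobCache
    {
    public:
        explicit BlobCache(std::size_t limit) : maxBlobs(limit) {}

        void put(BlobId id, BlobData data)
        {
            if (order.size() >= maxBlobs)         // cache full:
            {
                items.erase(order.front());       // drop the oldest cached blob
                order.pop_front();
            }
            order.push_back(id);
            items[id] = std::move(data);
        }

    private:
        std::size_t maxBlobs;
        std::list<BlobId> order;                  // insertion order, oldest first
        std::map<BlobId, BlobData> items;
    };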

@hvlad (Member, Author) commented Nov 14, 2024

> Truth be told, my first thought was that the cache is very tiny - just the blobs from the last fetched row - but this appears inefficient when we try to support various grids.

Yes, those were my thoughts too. Also, consider batch fetching, when a whole batch of rows must be read from the wire - it will cache all corresponding blobs anyway.

> First of all, let's think about binding the cache not to the transaction but to the request/statement. It's hardly typical to close a statement and read blobs from it after the close. Moreover, in the worst case that will anyway work - in the old way, over the wire.

It was that way in my very first version of the code. Until I started to handle op_exec_immediate2 - it has no statement :)

It is possible to mark blobs by statement id (when possible) and remove such blobs from the transaction cache on statement close.
But I prefer to avoid that, so far. It gives the "not typical" apps a chance to access cached blobs after statement close - and I guess it is not so atypical when there is no cursor, i.e. for 'EXECUTE PROCEDURE', etc.

> With limiting the cache size arrives one more tunable parameter, and I'm afraid there are already too many of them: blob size limit per-attachment or per-statement, maybe on a per-field basis (at least on/off), default BPB, maybe on a per-field basis too. (Hmm - are there many cases when >1 blob per row is returned?)

If there are too many parameters, we can put them into a separate dedicated interface, say IClientBlobCache, that will be implemented by the Remote provider only.

And I'm sure there are applications that have many blobs in their result sets. Look at the monitoring tables, for example: MON$STATEMENTS has two blobs, and there are others.

> Last but not least - is blob inlining enabled by default? To my mind yes, but very reasonable (i.e. not too big) defaults should be used.

Currently it is enabled - otherwise nobody would be able to test the feature ;)

One of the goals of this PR is to discuss and then implement the necessary set of parameters and the corresponding API to customize the blob cache.

So far, I see two really required parameters: 'maximum blob size for inline sending' (per-statement or per-attachment - to be decided; it should be known to the server) and 'size of blob cache' (per-attachment, client-only). Others are 'good to have' but not highly required: BPB, per-field inlining.

@hvlad (Member, Author) commented Nov 15, 2024

The builds for testing can be found here:
https://github.com/FirebirdSQL/firebird/actions/runs/11836803458
Scroll the page down to the 'Artifacts' section.

@sim1984 commented Nov 18, 2024

I tried to conduct experiments on a local network. There are no latency problems there; however, I will provide some results of the experiment anyway.

Run the query in different variants

select
  remark
from horse
where remark is not null

It contains 66794 small BLOBs.

Run IBExpert with this query and do FetchAll

Results Firebird-5.0.2.1567-0-9fbd574-windows-x64 (server + client):
640ms Memory consumption 38 MB (IBExpert)

Results Firebird-6.0.0.526-0-Initial-windows-x64 (server + client):
1s 187ms Memory consumption 385 MB (IBExpert)

Probably there will be a gain in networks with high latency; I will try to check in the near future. In the meantime, the experiment shows that default blob prefetching is not always useful and, at the least, consumes more memory.

PS

select
  sum(octet_length(remark)) as len
from horse
where remark is not null
LEN
=========
6 558 101

The overhead seems quite large to save 6 MB.

Am I right in understanding that 16K of memory is always allocated for each BLOB? I also don't know how exactly BLOBs are handled in IBExpert; perhaps it doesn't close a fully read BLOB until the end of the query/transaction. What about limiting the cache to storing the last N BLOBs?

@AlexPeshkoff (Member) commented Nov 18, 2024 via email

@hvlad (Member, Author) commented Nov 18, 2024

> Am I right in understanding that 16K of memory is always allocated for each BLOB?

Yes, and it was not introduced by this PR.

BTW, 66794 blobs should consume nearly 1GB, while you see about 350MB - which memory counter did you look at?
I tried with 67000 blobs of 1024 bytes and saw an increase of about 1.4GB in 'Private Bytes' and about 1.1GB in 'Virtual Memory' (that was with a DEBUG build).

> I also don't know how exactly BLOBs are handled in IBExpert; perhaps it doesn't close a fully read BLOB until the end of the query/transaction.

I doubt IBE reads any blob contents when it shows data in a grid - not until the user explicitly asks for it by moving the mouse cursor over a grid cell or by pressing the '...' button in the cell. And the debugger confirms it.

> What about limiting the cache to storing the last N BLOBs?

It was proposed, but we still have not defined which settings, and which API to manage them, we need.

Thanks for testing!

@hvlad (Member, Author) commented Nov 18, 2024

@AlexPeshkoff: I think the time overhead is related to memory allocations.

@sim1984 commented Nov 18, 2024

I just looked at the Task Manager. It is clear that it does not display memory quite correctly, but here the difference is visible to the naked eye. And I have no claims about performance; I understand that somewhat different conditions need to be tested (primarily networks with high latency). Nevertheless, I consider this test useful for understanding that, without proper settings, we can at the very least get excessive memory consumption.

@hvlad (Member, Author) commented Nov 19, 2024

As there are no better ideas, I offer the following API changes:

interface Statement : ReferenceCounted
{
...
version:	// 6.0
	// Inline blob transfer
	uint getMaxInlineBlobSize(Status status);
	void setMaxInlineBlobSize(Status status, uint size);
}
interface Attachment : ReferenceCounted
{
...

version:	// 6.0
	// Blob caching by client
	uint getBlobCacheSize(Status status);
	void setBlobCacheSize(Status status, uint size);

	// Inline blob transfer
	uint getMaxInlineBlobSize(Status status);
	void setMaxInlineBlobSize(Status status, uint size);
}

@AlexPeshkoff (Member) commented Nov 19, 2024 via email

@aafemt (Contributor) commented Nov 19, 2024

I see no need for new methods in IAttachment; it can be handled in a backward-compatible way using DPB and info items, unless someone wants to make such adjustments dynamically during the attachment lifetime.

@sim1984 commented Nov 19, 2024

> I see no need for new methods in IAttachment; it can be handled in a backward-compatible way using DPB and info items, unless someone wants to make such adjustments dynamically during the attachment lifetime.

The presence of methods in IAttachment does not cancel the need for DPB tags to set these parameters initially, when connecting. And yes, since the cache itself exists per transaction, it makes sense to change these parameters during the connection. If I understand correctly, the value from setBlobCacheSize is passed to the transaction at startup, while IAttachment::setMaxInlineBlobSize is used during IAttachment::execute and IAttachment::openCursor and passes the default value to IStatement when calling IAttachment::prepare.

@hvlad (Member, Author) commented Nov 23, 2024

New methods were added:

interface Attachment
...
	// Blob caching by client
	uint getMaxBlobCacheSize(Status status);
	void setMaxBlobCacheSize(Status status, uint size);

	// Inline blob transfer
	uint getMaxInlineBlobSize(Status status);
	void setMaxInlineBlobSize(Status status, uint size);
...
interface Statement
...
	// Inline blob transfer
	uint getMaxInlineBlobSize(Status status);
	void setMaxInlineBlobSize(Status status, uint size);

New DPB and info items will be added later, after the interface changes above have finally stabilized.

Common behaviour

All methods above are implemented by both the Remote and Engine providers.

The Engine provider sets an isc_wish_list error in the status and returns zero, when appropriate.
The Remote provider checks the protocol version and gets/sets internal object data (no network round trip), or sets an isc_wish_list error in the status and returns zero, when appropriate.
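
For example, an application could probe for support like this (a sketch; assumes a Firebird::CheckStatusWrapper 'st', which records errors instead of throwing):

    att->setMaxInlineBlobSize(&st, 16384);
    if ((st.getState() & Firebird::IStatus::STATE_ERRORS) && st.getErrors()[1] == isc_wish_list)
    {
        // old provider or old protocol version: inline blob transfer is unavailable
    }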

Inline blob size

Attachment::setMaxInlineBlobSize() sets the default value for the inline blob size. This value is used by Attachment::execute() and Attachment::openCursor().

Also, this value is assigned to a new Statement instance created by Attachment::prepare(). It can be changed for a given statement using Statement::setMaxInlineBlobSize(), but that should be done before the call to Statement::execute() or Statement::openCursor().

The default value for the inline blob size is 16KB. To disable inline blob transfer, set the inline blob size to zero.

Currently, the maximum value of the inline blob size is not limited. It is open for discussion whether some limit should be introduced and what value to choose. The obvious value of the maximum possible segment size (64KB-2, or 65534 bytes) could be recommended for cursors (many blobs to cache), but in the case of a single-row result set it is not so obvious. The protocol is not limited by a 2-byte length, if I'm not mistaken.

The value of the inline blob size is transferred within the op_execute, op_execute2 and op_exec_immediate2 packets, for supported protocol versions only.
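
A minimal usage sketch of the calls described above (variable names and values are arbitrary examples):

    // Attachment-wide default for all subsequent statements.
    att->setMaxInlineBlobSize(&status, 32768);

    // Per-statement override: must happen before execute()/openCursor().
    Firebird::IStatement* stmt = att->prepare(&status, tra, 0, sql, SQL_DIALECT_CURRENT, 0);
    stmt->setMaxInlineBlobSize(&status, 0);      // disable inlining for this statement only
    Firebird::IResultSet* rs = stmt->openCursor(&status, tra, nullptr, nullptr, nullptr, 0);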

Client blobs caching

The content of inline blobs is cached at the Attachment level. A blob is removed from the cache after the application uses it (opens and then closes it) or if the application opens the same blob using a custom BPB.

The size of the client blob cache is limited. The default size is 10MB, and it can be changed using Attachment::setMaxBlobCacheSize(). There is no upper or lower limit for this value. A limit change is not applied immediately, i.e. if the new limit is less than the currently used size, nothing happens. If the blob cache has no space for a new inlined blob, that blob is silently discarded.

Note, currently the per-blob buffer is pre-allocated and its size is 16KB. This means that smaller blobs require no additional memory re-allocations but occupy 16KB in memory (and in the blob cache) regardless of their real size. I am considering changes in this regard.
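
The corresponding attachment-level knob, sketched with arbitrary example values:

    unsigned cur = att->getMaxBlobCacheSize(&status);    // default: 10MB
    att->setMaxBlobCacheSize(&status, 32 * 1024 * 1024); // raise the limit to 32MB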

@hvlad (Member, Author) commented Nov 23, 2024

The updated builds for testing can be found here:
https://github.com/FirebirdSQL/firebird/actions/runs/11986926791
Scroll the page down to the 'Artifacts' section.

@sim1984 commented Nov 25, 2024

I compiled a small application and tested Firebird 5.0 and 6.0 with different fbclient versions (parameters were not changed). The following query was tested:

select
  code_horse,
  remark
from horse
where remark is not null

where remark is a BLOB SUB_TYPE TEXT column.

Record count: 66794

Each test executed the query twice: in the first case the blob itself was not read (only its identifier); in the second case the BLOB was read entirely and closed, and the total length of all BLOBs was calculated. So that the compiler would not optimize away the empty loop in the first case, I calculated the sum of code_horse.
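
The test application is not published; the measured loop presumably looks something like this sketch (the message buffer, field offsets and variable names are all assumptions):

    // Second pass: read every blob fully, summing code_horse and total blob length.
    unsigned char buffer[64];                             // assumed message buffer
    const unsigned codeHorseOffset = 0, remarkOffset = 8; // assumed message layout
    ISC_INT64 sum = 0, totalBlobSize = 0;

    while (rs->fetchNext(&status, buffer) == Firebird::IStatus::RESULT_OK)
    {
        sum += *reinterpret_cast<ISC_INT64*>(buffer + codeHorseOffset);

        ISC_QUAD* blobId = reinterpret_cast<ISC_QUAD*>(buffer + remarkOffset);
        Firebird::IBlob* blob = att->openBlob(&status, tra, blobId, 0, nullptr);

        char seg[16384];
        unsigned len = 0;
        while (blob->getSegment(&status, sizeof(seg), seg, &len) != Firebird::IStatus::RESULT_NO_DATA)
            totalBlobSize += len;

        blob->close(&status); // fully read and closed: the cached copy is released
    }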

Here are the results I got:

==== Blob inline test ====

Firebird server version
Firebird/Windows/AMD/Intel/x64 (access method), version "WI-V5.0.2.1567 Firebird 5.0 9fbd574"
Firebird/Windows/AMD/Intel/x64 (remote server), version "WI-V5.0.2.1567 Firebird 5.0 9fbd574/tcp (Server)/P18:C"
Firebird/Windows/AMD/Intel/x64 (remote interface), version "WI-V5.0.1.1469 Firebird 5.0/tcp (station9)/P18:C"
on disk structure version 13.1

Test without read blob
Elapsed time: 491ms
sum: 72304481272

Test with read blob
Elapsed time: 66672ms
sum: 72304481272
Blob size: 6558101

==== Blob inline test ====

Firebird server version
Firebird/Windows/AMD/Intel/x64 (access method), version "WI-T6.0.0.533 Firebird 6.0 Initial"
Firebird/Windows/AMD/Intel/x64 (remote server), version "WI-T6.0.0.533 Firebird 6.0 Initial/tcp (Server)/P18:C"
Firebird/Windows/AMD/Intel/x64 (remote interface), version "WI-V5.0.1.1469 Firebird 5.0/tcp (station9)/P18:C"
on disk structure version 14.0

Test without read blob
Elapsed time: 500ms
sum: 72304481272

Test with read blob
Elapsed time: 66288ms
sum: 72304481272
Blob size: 6558101

==== Blob inline test ====

Firebird server version
Firebird/Windows/AMD/Intel/x64 (access method), version "WI-V5.0.2.1567 Firebird 5.0 9fbd574"
Firebird/Windows/AMD/Intel/x64 (remote server), version "WI-V5.0.2.1567 Firebird 5.0 9fbd574/tcp (Server)/P18:C"
Firebird/Windows/AMD/Intel/x64 (remote interface), version "WI-V5.0.2.1567 Firebird 5.0 9fbd574/tcp (station9)/P18:C"
on disk structure version 13.1

Test without read blob
Elapsed time: 512ms
sum: 72304481272

Test with read blob
Elapsed time: 46041ms
sum: 72304481272
Blob size: 6558101

==== Blob inline test ====

Firebird server version
Firebird/Windows/AMD/Intel/x64 (access method), version "WI-T6.0.0.533 Firebird 6.0 Initial"
Firebird/Windows/AMD/Intel/x64 (remote server), version "WI-T6.0.0.533 Firebird 6.0 Initial/tcp (Server)/P18:C"
Firebird/Windows/AMD/Intel/x64 (remote interface), version "WI-V5.0.2.1567 Firebird 5.0 9fbd574/tcp (station9)/P18:C"
on disk structure version 14.0

Test without read blob
Elapsed time: 491ms
sum: 72304481272

Test with read blob
Elapsed time: 45596ms
sum: 72304481272
Blob size: 6558101

==== Blob inline test ====

Firebird server version
Firebird/Windows/AMD/Intel/x64 (access method), version "WI-V5.0.2.1567 Firebird 5.0 9fbd574"
Firebird/Windows/AMD/Intel/x64 (remote server), version "WI-V5.0.2.1567 Firebird 5.0 9fbd574/tcp (Server)/P18:C"
Firebird/Windows/AMD/Intel/x64 (remote interface), version "WI-T6.0.0.533 Firebird 6.0 Initial/tcp (station9)/P18:C"
on disk structure version 13.1

Test without read blob
Elapsed time: 513ms
sum: 72304481272

Test with read blob
Elapsed time: 48868ms
sum: 72304481272
Blob size: 6558101

==== Blob inline test ====

Firebird server version
Firebird/Windows/AMD/Intel/x64 (access method), version "WI-T6.0.0.533 Firebird 6.0 Initial"
Firebird/Windows/AMD/Intel/x64 (remote server), version "WI-T6.0.0.533 Firebird 6.0 Initial/tcp (Server)/P19:C"
Firebird/Windows/AMD/Intel/x64 (remote interface), version "WI-T6.0.0.533 Firebird 6.0 Initial/tcp (station9)/P19:C"
on disk structure version 14.0

Test without read blob
MaxInlineBlobSize = 16384
Elapsed time: 1167ms
sum: 72304481272

Test with read blob
MaxInlineBlobSize = 16384
Elapsed time: 2104ms
sum: 72304481272
Blob size: 6558101

It is clear that inline BLOBs (Firebird 6.0 server with fbclient 6.0) significantly reduce the time of reading them in full, but it also becomes clear that if the blobs themselves are not read, only their ids, reading slows down. This is not surprising, given that the network packets carrying the inline blobs are sent for nothing. That is why the ability to disable this optimization via parameters is important (BLOBs can be read in a deferred manner, and it is not known when).

fbclient 5.0.2 (with its BLOB sending optimization) is faster than 5.0.1 by about 30%.

@sim1984 commented Nov 25, 2024

The last test, with the optimization (stmt->setMaxInlineBlobSize(status, 0); for the 'Test without read blob' pass):

==== Blob inline test ====

Firebird server version
Firebird/Windows/AMD/Intel/x64 (access method), version "WI-T6.0.0.533 Firebird 6.0 Initial"
Firebird/Windows/AMD/Intel/x64 (remote server), version "WI-T6.0.0.533 Firebird 6.0 Initial/tcp (Server)/P19:C"
Firebird/Windows/AMD/Intel/x64 (remote interface), version "WI-T6.0.0.533 Firebird 6.0 Initial/tcp (station9)/P19:C"
on disk structure version 14.0

Test without read blob
MaxInlineBlobSize = 0
Elapsed time: 487ms
sum: 72304481272

Test with read blob
MaxInlineBlobSize = 16384
Elapsed time: 2124ms
sum: 72304481272
Blob size: 6558101

@hvlad (Member, Author) commented Nov 25, 2024

@sim1984, many thanks for testing!

In the last test, with setMaxInlineBlobSize(0), there is confusion about the 2nd-pass (with reading blobs) result: what was the MaxInlineBlobSize value?

@sim1984 commented Nov 25, 2024

> @sim1984, many thanks for testing!
>
> In the last test, with setMaxInlineBlobSize(0), there is confusion about the 2nd-pass (with reading blobs) result: what was the MaxInlineBlobSize value?

It uses the default value. I just output it:

std::cout << std::format("MaxInlineBlobSize = {}", stmt->getMaxInlineBlobSize(status)) << std::endl;

@livius2 commented Dec 13, 2024

Thank you @hvlad for implementing this and @sim1984 for testing.
Is there a chance that memory consumption will be equal to the actual content size, not the full 16K? Our apps are still 32-bit (using 64-bit Firebird, of course). As it stands, this fixes one problem but creates a second.

@hvlad (Member, Author) commented Dec 14, 2024

> Is there a chance that memory consumption will be equal to the actual content size, not the full 16K?

Yes, I am going to change this.

> As it stands, this fixes one problem but creates a second.

Is that a measured fact or just a guess? Note, the blob cache size is limited to 10MB by default.
