Operations that frequently access large entries, especially operations that modify them, are likely to have a high resource impact:
- increased CPU cost decoding and encoding the large entries at the DB and network layers
- increased network cost transferring the large entries over the network (if they are not filtered out)
- increased disk cost reading and writing the entries to the DB.
The last point is particularly important.
This issue can be closed once we have monitoring metrics for large-entry access (we may want different thresholds). The metric would be a histogram of entry sizes, similar to our other operation-based metrics. Open questions:
- which component should the metrics be associated with: the connection handlers, the backends, or a global component?
- how do we measure the size? The best candidate is likely the DB layer: we could record the size at the point where we read/write id2entry. However, abstraction layers may make it hard to propagate this information up the call stack so that it can be reported correctly. Yannick Lecaillez has suggested systematically adding a virtual attribute at the DB layer; it would bypass the virtual attribute rule configuration, but perhaps that doesn't matter.
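To make the histogram idea concrete, here is a minimal sketch of a bucketed entry-size histogram that could be updated at the id2entry read/write point. All names here (`EntrySizeHistogram`, the bucket thresholds) are hypothetical; a real implementation would plug into the server's existing monitoring framework rather than this stand-alone class.

```java
import java.util.concurrent.atomic.AtomicLongArray;

/**
 * Hypothetical sketch: a lock-free bucketed histogram of entry sizes.
 * Each bucket counts entries whose encoded size falls at or below its
 * upper bound; the final bucket is unbounded.
 */
final class EntrySizeHistogram {
    // Bucket upper bounds in bytes, in ascending order.
    private final long[] upperBounds;
    // One counter per bound, plus one overflow bucket.
    private final AtomicLongArray counts;

    EntrySizeHistogram(long... upperBounds) {
        this.upperBounds = upperBounds.clone();
        this.counts = new AtomicLongArray(upperBounds.length + 1);
    }

    /** Record one entry of the given encoded size, e.g. when reading/writing id2entry. */
    void record(long sizeInBytes) {
        int bucket = 0;
        while (bucket < upperBounds.length && sizeInBytes > upperBounds[bucket]) {
            bucket++;
        }
        counts.incrementAndGet(bucket);
    }

    long countInBucket(int bucket) {
        return counts.get(bucket);
    }
}
```

With bounds of, say, 1 KB and 100 KB, the overflow bucket directly answers "how many operations touched an entry larger than 100 KB", which is the threshold-style question this issue is about.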
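The virtual-attribute suggestion could also be sketched: when the DB layer decodes an entry, it unconditionally attaches the encoded size as an attribute, sidestepping the call-stack propagation problem because the size travels with the entry itself. The attribute name `ds-entry-size-bytes` and the `Map`-based entry stand-in below are both assumptions for illustration; the real implementation would use the server's `Entry` type.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Hypothetical sketch: attach the encoded entry size as a virtual
 * attribute at the DB decode point, bypassing the virtual attribute
 * rule configuration (it is added unconditionally).
 */
final class EntrySizeDecorator {
    // Hypothetical attribute name.
    static final String ENTRY_SIZE_ATTR = "ds-entry-size-bytes";

    /**
     * Stand-in for the id2entry decode step: returns the entry's
     * attributes plus the size of the encoded form it was read from.
     */
    static Map<String, String> decorate(Map<String, String> attributes, byte[] encodedEntry) {
        Map<String, String> result = new LinkedHashMap<>(attributes);
        result.put(ENTRY_SIZE_ATTR, Long.toString(encodedEntry.length));
        return result;
    }
}
```

Upper layers (connection handlers, metrics, logging) could then read the attribute off the entry without any new plumbing through the abstraction layers.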