[OPENDJ-431] Server-side sort control only works on result sets of less than 100000 entries Created: 01/Mar/12  Updated: 08/Nov/19  Resolved: 18/Apr/18

Status: Done
Project: OpenDJ
Component/s: core server
Affects Version/s: 2.6.0
Fix Version/s: 6.0.0

Type: Bug Priority: Minor
Reporter: dlange Assignee: Matthew Swift
Resolution: Fixed Votes: 0
Labels: Verified, release-notes

Issue Links:
relates to OPENDJ-2357 OpenDJ: debugsearchindex incorrectly ... Done
is related to OPENDJ-4367 CURSOR_ENTRY_LIMIT should be configur... Done
QA Assignee: Viktor Nawrath [X] (Inactive)
Support Ticket IDs:


Reproduction steps:
1) Create an ordering index on a particular attribute.
2) Create several hundred thousand objects that have that attribute.
3) Perform a paged search with the filter: "(attribute>=\00)" and a server side sort control on the attribute.
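For reference, step 3 can be driven with OpenDJ's ldapsearch tool. The host, port, credentials, and attribute name below are placeholders, and the exact option names may vary between OpenDJ versions:

```shell
# Paged search with a server-side sort control on the indexed attribute.
# "myAttribute" is a placeholder for the attribute with the ordering index.
ldapsearch -h localhost -p 1389 -D "cn=Directory Manager" -w password \
  -b "dc=example,dc=com" \
  --simplePageSize 1000 \
  --sortOrder +myAttribute \
  "(myAttribute>=\00)" myAttribute
```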

Expected result:
Return all the objects that have the attribute in sorted order.

Actual result:
The search fails with the error message: "The search results cannot be sorted because the given search request is not indexed."

It appears that even though there is an ordering index on this attribute, the search code ignores the index for queries that would return more than 100,000 entries.

If I understand the source code correctly:
org.opends.server.backends.jeb.Index has a cursorEntryLimit parameter that controls the maximum number of results that may be returned by the index.
But this parameter is not configurable: it is always hardcoded to 100,000 in org.opends.server.backends.jeb.AttributeIndex.
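To illustrate the effect of the limit, here is a minimal sketch of how a cursor entry limit short-circuits index evaluation. The class and method names are hypothetical simplifications, not the actual OpenDJ classes:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified model of an index cursor accumulating candidate entry IDs.
// Once the accumulated count exceeds the limit, the index gives up and
// returns "undefined" (modelled here as null), so the search is treated
// as unindexed even though an index exists.
public class CursorLimitSketch {
    static final int CURSOR_ENTRY_LIMIT = 100_000; // hardcoded limit

    /** Returns the matching IDs, or null ("undefined") if the limit is exceeded. */
    static List<Long> evaluate(long[] indexedIds) {
        List<Long> result = new ArrayList<>();
        for (long id : indexedIds) {
            if (result.size() >= CURSOR_ENTRY_LIMIT) {
                return null; // too many candidates: undefined set
            }
            result.add(id);
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(evaluate(new long[100_000]) != null); // at the limit: still indexed
        System.out.println(evaluate(new long[100_001]) == null); // over the limit: undefined
    }
}
```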

Comment by Matthew Swift [ 02/Mar/12 ]

That's correct, the limit is hard coded to 100,000.

Comment by Matthew Swift [ 02/Mar/12 ]

Consider this as a candidate for 2.5.0.

We should first investigate why this test case is not working. In particular, will it work if the user has the unindexed search privilege? What is it about the search that triggers the failure - the filter or the sort control?

An obvious approach would be to add a property to the JE backend configuration whose default value is 100,000, but which also supports "unlimited". However, the limit is there for a reason: to prevent potential heap exhaustion while processing very large ID lists. Can this be avoided?
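The heap concern is easy to quantify with a back-of-the-envelope sketch. Assuming 8 bytes per entry ID (a Java long, as in the JE backend's ID lists; real EntryIDSet overhead is higher), an unlimited index lookup on a large DIT can pin tens of megabytes per search:

```java
public class IdListFootprint {
    // Approximate minimum heap cost of an entry ID list: 8 bytes per long ID.
    // This ignores array/object overhead, so real usage is somewhat higher.
    static long minBytes(long idCount) {
        return idCount * 8L;
    }

    public static void main(String[] args) {
        System.out.println(minBytes(100_000));    // 800000 (~0.8 MB): the default limit
        System.out.println(minBytes(10_000_000)); // 80000000 (~80 MB): an "unlimited" lookup on a large DIT
    }
}
```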

Comment by Ian Packer [X] (Inactive) [ 01/Oct/15 ]

This is also relevant in any case when an index is used (not just sorting).

A real world example might be OpenAM CTS trying to search for expired tokens with an ordering index/bounded date filter, and where the number of expired tokens has been allowed to reach 100k+.

In 2.6.x, hitting this limit abandons the index and returns a completely empty EntryIDSet, leading to NOT-INDEXED-like symptoms (which can be quite confusing).

In current trunk this behaviour is now a bit nicer, returning LIMIT-EXCEEDED:

              if (totalIDCount > IndexFilter.CURSOR_ENTRY_LIMIT)
              {
                // There are too many. Give up and return an undefined list.
                // Use any key to have debugsearchindex return LIMIT-EXCEEDED instead of NOT-INDEXED.
                return newUndefinedSetWithKey(cursor.getKey());
              }

Comment by Matthew Swift [ 06/Jan/16 ]

Re-opening for re-evaluation.

Comment by Matthew Swift [ 18/Apr/18 ]

Joseph de-Menditte / Yannick Lecaillez - I think we can close this issue. Do you agree?

  • the cursor entry limit is now configurable
  • server side sorting now makes better use of ordering indexes where possible
  • TTL feature eliminates the main use case.

Comment by Joseph de-Menditte [ 18/Apr/18 ]

Yes for these two, AFAIK

  • the cursor entry limit is now configurable
  • TTL feature eliminates the main use case

Comment by Matthew Swift [ 18/Apr/18 ]

Resolving this issue as fixed in 6.0.0 for the reasons described above.

Comment by Viktor Nawrath [X] (Inactive) [ 19/Apr/18 ]

Verified using DS 6.0.0-RC2.

All three features mentioned in the previous comments are now implemented and tested.

Generated at Tue Nov 24 21:12:45 UTC 2020 using Jira 7.13.12#713012-sha1:6e07c38070d5191bbf7353952ed38f111754533a.