[OPENAM-12333] AMIdentitySubject policy evaluation not cached when a delegated admin has many groups and a datastore is used Created: 17/Jan/18  Updated: 12/Apr/19  Resolved: 31/Mar/18

Status: Resolved
Project: OpenAM
Component/s: delegation, entitlements, performance
Affects Version/s: 13.5.1
Fix Version/s: 13.5.3, 6.0.0, 14.1.2, 5.5.2

Type: Bug Priority: Major
Reporter: C-Weng C Assignee: C-Weng C
Resolution: Fixed Votes: 0
Labels: EDISON
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Relates
is related to OPENAM-12525 Privilege evaluation slow when a dele... Open
is related to OPENAM-10931 IdentitySubject not adding isMember()... Closed
Sprint: AM Sustaining Sprint 48, AM Sustaining Sprint 49
Story Points: 3
Needs backport:
No
Support Ticket IDs:
Needs QA verification:
Yes
Functional tests:
No
Are the reproduction steps defined?:
Yes, and I used the same as in the description

 Description   

Bug description

When a full-access delegated admin belongs to many groups, the console is slow to access for realm-related URLs. There is a lot of activity evaluating group isMember() checks against the directory; the delegation evaluation appears to cause a lot of group-related work.

How to reproduce the issue

  1. Create the datastores: AD, AD2 (can reuse AD but with a unique name), and one embedded
     (all with DnCache=15000 enabled; this does not make a difference).
     Populate AD with 1000 groups:

For ($i = 1; $i -le 1000; $i++) {
    $text = "TestGroup2018-$i"
    New-ADGroup -Name $text -GroupScope Global
}

  2. Create amuser1 and amuser2.
  3. Create an AMGroup with FULL admin privileges.
  4. Assign both users to AMGroup.
  5. Assign amuser1 to TestGroup2018-* while amuser2 only has AMGroup.
  6. Test access to the XUI; access looks fine.
  7. Add "ismemberof" and "member" to the LDAP User Attributes, so that
     amuser1 accesses ALL the groups while amuser2 is fine.
     For this test, make sure that the datastore does NOT have this user attribute.
  8. You may want to create 2 subrealms, as it seems this makes things even slower.

Observation: access to /global-config/realms?_queryFilter=true is slow (seen from the network trace), with a lot of activity evaluating group isMember()/getDN(). Note that amuser2, which has few groups tied to it, and amadmin both get reasonable console access speed.

You can also try creating a new realm as amuser1 (the user with the large group membership) and see that it is very slow (and does a lot of evaluation).

Expected behaviour
Console access should be acceptably fast for users with large group memberships.
Current behaviour
Console access is slow when logged in as a delegated admin that has 1000-2000 groups. In fact, creating a new realm as this delegated admin is orders of magnitude slower.

Work around

Limit the number of groups per delegated admin.

Code analysis

AMIdentity.equals(), when the identity is of IdType.GROUP, calls out to getFullyQualifiedName() to resolve the actual group DN for matching. This is a physical DN retrieval against the directory.

AMIdentitySubject is also used for delegated admin, and the fix similar to the subject evaluation cache (OPENAM-10931) was not implemented for AMIdentitySubject, so repeated evaluations are not cached. Testing suggests caching could cut out a lot of redundant evaluation (on repeated access), so a similar fix is needed here.

In this test the DN cache is enabled and large, to avoid repeated DN retrievals, but with this many groups per user it does not cut things down much.
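The missing-cache problem described above can be sketched in isolation. The class below is a minimal illustration of the OPENAM-10931-style idea (memoize repeated isMember() evaluations so each group DN costs at most one directory round trip per evaluation pass); the names are hypothetical and this is not the actual AMIdentitySubject API, with the LDAP search stood in for by a caller-supplied predicate:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Predicate;

// Hypothetical sketch of a per-evaluation membership cache.
// In AM the "directoryLookup" would be the group isMember() LDAP check.
class SubjectMembershipCache {
    private final Map<String, Boolean> cache = new ConcurrentHashMap<>();
    private int directoryCalls = 0; // counts simulated LDAP round trips

    boolean isMember(String groupDn, Predicate<String> directoryLookup) {
        // computeIfAbsent memoizes both positive and negative answers,
        // so repeated checks for the same DN hit the directory only once.
        return cache.computeIfAbsent(groupDn, dn -> {
            directoryCalls++;
            return directoryLookup.test(dn);
        });
    }

    int getDirectoryCalls() {
        return directoryCalls;
    }
}
```

With 1000-2000 groups evaluated repeatedly per console request, this kind of memoization is what turns O(requests × groups) directory searches into O(groups).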



 Comments   
Comment by C-Weng C [ 17/Jan/18 ]

Related in that having the subject evaluation cache for AMIdentitySubject does cut down on repeated evaluation (especially if multiple realms exist).

Comment by C-Weng C [ 28/Feb/18 ]

Another set of tests: negative group DN lookups

Testcase:

1) Install 13.5.1 with OPENAM-10233 (since I want to use some members that may not be found)

You can use an embedded DJ or an external one (no issue).
Set up the embedded one as the default.

2) Set up another datastore (say Tivoli/AD or whatever)

You can reuse the above embedded DJ but use a new connection string.

Make sure the naming/search attribute is "uid".
Set the user container to "people" and the group container to "groups".
Set the base LDAP DN to "ou=tivoli,dc=openam,dc=forgerock,dc=org".

3) Now create the following users and groups

admin (user)
admin2 (user)
OpenAMAdministrators (group)
OpenAMAgents (group)
OpenAMApplications (group)
OpenAMRestUsers (group)

4) Assign privileges to

OpenAMAdministrators (FULL)
OpenAMAgents (FULL+Realm right)
OpenAMApplications (Policy+REST right)
OpenAMRestUsers (Policy+REST rights)

5) Create 10 new subrealms. Remove the embedded datastore from them and adjust the "tivoli" datastore to a unique base DN in each.

6) Now assign the admin user to OpenAMAdministrators

7) As a hack, go to the top realm's Tivoli datastore and add "description" to "Attribute Name for Group Membership:"

8) Add 1000 "description" attribute values to a new admin2 user of the form "cn=Group<n>, dc=openanm,dc=forgerock,dc=org", varying n from 0-999. Also add "cn=OpenAMAdministrators,ou=Groups,dc=openanm,dc=forgerock,com,dc=org" so that admin2 is also part of the admin group.

The objective is that the Groups are not found in ANY of the subrealms and other datastore.
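Since the test is built around group DNs that resolve nowhere, the cost pattern it exposes is repeated directory searches for the same unresolvable DN. A cache that also remembers misses (a negative cache) would bound that cost at one search per distinct DN. The sketch below is purely illustrative, assuming a hypothetical resolve step in place of AM's real DN cache, which only helps for DNs that do resolve:

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical sketch of a DN-resolution cache that remembers misses.
// Optional.empty() stands for "searched the directory, DN not found".
class NegativeDnCache {
    private final Map<String, Optional<String>> cache = new ConcurrentHashMap<>();
    private int searches = 0; // simulated LDAP searches

    Optional<String> resolve(String dn, Function<String, Optional<String>> directorySearch) {
        // One directory search per distinct DN, whether it is found or not;
        // subsequent lookups of a missing DN are answered from the cache.
        return cache.computeIfAbsent(dn, d -> {
            searches++;
            return directorySearch.apply(d);
        });
    }

    int getSearches() {
        return searches;
    }
}
```

Under this scheme the 1000 deliberately unresolvable "cn=Group<n>,dc=openanm,..." values would each be searched once per cache lifetime instead of once per evaluation.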

Now test...

Test1: Above config

1. amadmin login ok

2. admin login ok (acceptable)

3. admin2 login takes 47 sec for /global-config/realms?_queryFilter=true

(times 2, since this is run twice)

Test2: Increase datastore connection pool to 100 for all datastore

No changes from above

Test3: Increase DN cache

No significant change. (PS: This may help for positive DN cache hits, but this test setup has a lot of groups that are not matched.)

Test4: Increase the EntitlementCache pool size from 10 to 32 or 64

No significant change

Test5: Add fix OPENAM-12333

1. amadmin login now takes 12 sec (2x improvement)

Test6: Artificially avoid having the code query the directory for these groups

1. All logins, including amadmin, complete in < 1 sec.

  • The DNs that are not found are still what causes the slowness
Generated at Fri Sep 25 23:20:55 UTC 2020 using Jira 7.13.12#713012-sha1:6e07c38070d5191bbf7353952ed38f111754533a.