[OPENAM-9749] Resource leak in ssoadm's audit logging Created: 27/Sep/16 Updated: 10/Oct/17 Resolved: 10/Oct/17
|Fix Version/s:||13.5.1, 14.0.0|
|Reporter:||Sam Phua||Assignee:||Peter Major [X] (Inactive)|
|Attachments:||first-stage.json generate-policy.sh mypolicy.json|
|Sprint:||AM Sustaining Sprint 28, AM Sustaining Sprint 43|
|Support Ticket IDs:|
|Needs QA verification:||
|Are the reproduction steps defined?:||
Yes, and I used the same steps as in the description.
Create a JSON policy file with around 500 policies.
Run the following command:
The following OOM exception will be observed:
A workaround is to import the policies using the XACML format.
|Comment by Peter Major [X] (Inactive) [ 28/Sep/16 ]|
The resource leak was actually within the audit log portion of ssoadm. Firstly, the Response objects weren't closed, which meant that http-client async processing threads stayed around expecting further data. Secondly, a new Client object with a new HttpClientHandler object was created for each use, but HttpClientHandler objects are meant to be closed as well. The solution was to close the response in LogWriter and to ensure that the Client object was retrieved via CloseableHttpClientProvider.
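The two leaks described above follow a common resource-management pattern. The sketch below is not OpenAM's actual code; it uses hypothetical stub classes (named after the real CHF types Client, HttpClientHandler, and Response, but only modelling their Closeable behaviour) to illustrate the leaky pattern versus the fixed one, with counters standing in for the threads that would otherwise be retained.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AuditClientSketch {
    // Counters stand in for the native resources (worker threads) that
    // the real objects hold onto until closed.
    static final AtomicInteger OPEN_HANDLERS = new AtomicInteger();
    static final AtomicInteger OPEN_RESPONSES = new AtomicInteger();

    // Stub for HttpClientHandler: owns worker threads, so it must be
    // closed when no longer needed.
    static class HttpClientHandler implements AutoCloseable {
        HttpClientHandler() { OPEN_HANDLERS.incrementAndGet(); }
        @Override public void close() { OPEN_HANDLERS.decrementAndGet(); }
    }

    // Stub for Response: async processing threads wait for further data
    // until the response is closed.
    static class Response implements AutoCloseable {
        Response() { OPEN_RESPONSES.incrementAndGet(); }
        @Override public void close() { OPEN_RESPONSES.decrementAndGet(); }
    }

    static class Client {
        private final HttpClientHandler handler;
        Client(HttpClientHandler handler) { this.handler = handler; }
        Response send() { return new Response(); }
    }

    // Leaky pattern: a new handler per log write, response never closed.
    static void leakyLogWrite() {
        Client client = new Client(new HttpClientHandler());
        client.send(); // Response discarded without close()
    }

    // Fixed pattern: one shared, cached handler (analogous to retrieving
    // the Client via CloseableHttpClientProvider) and try-with-resources
    // around every response.
    static final HttpClientHandler SHARED_HANDLER = new HttpClientHandler();
    static final Client SHARED_CLIENT = new Client(SHARED_HANDLER);

    static void fixedLogWrite() {
        try (Response response = SHARED_CLIENT.send()) {
            // read status / entity here; close() runs automatically
        }
    }
}
```

With the leaky pattern, every audited ssoadm sub-command strands one handler and one response, which matches the observed OOM after a few hundred batched commands; the fixed pattern leaves the open-resource count flat.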
|Comment by Andrew Dunn [X] (Inactive) [ 11/Oct/16 ]|
Running several hundred batch "create-agent" commands in a row can result in errors in the ssoadm IdRepo debug log:
and in failed commands in the batch status file.
It's assumed this is due to the resource leak described here.
An effective workaround is to append "--nolog" to the ssoadm do-batch command.
|Comment by C-Weng C [ 12/Oct/16 ]|
|Comment by Philip Anderson [ 31/May/17 ]|
This still looks to be a problem in 13.5.1-RC4.
I ran the following using the attached mypolicy.json:
policies to be created and no errors
371 policies created
Many OOM errors
|Comment by Jeremy Cocks [ 18/Aug/17 ]|
Note: I can replicate Phil's findings using 13.5.1, AM 5, and AM 5.1. The output is similar across all versions noted previously.
|Comment by Peter Major [X] (Inactive) [ 10/Oct/17 ]|
Given that 13.5.1 is now released, I'd say let's open a new issue for this if this is still happening. Chances are there is a different resource leak now.