[bisq-network/bisq] Bisq using up too much memory on Linux even after closing (#3918)

Stan notifications at github.com
Wed Mar 4 00:10:49 UTC 2020


What I think is happening on my Linux machine is that the Java VM thinks it has 128 GB of RAM to work with.

My OpenJDK 10 and 11 VMs have a very large default -XX:MaxRAM setting of 128 GB.  It looks like libc malloc()s are creating a lot of extra memory segments via mmap() syscalls, rather than using heap memory for allocations via brk() calls.  During tracing sessions, I saw mmap() calls but no brk() calls.
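A quick way to see this for yourself (assuming strace is installed; attaching may need sudo depending on ptrace restrictions) is to trace just the memory syscalls of the running Bisq process:

	$ strace -f -e trace=mmap,munmap,brk -p $(pgrep -f BisqAppMain) 2>&1 | head -50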

The MaxRAM setting is described in the OpenJDK source.

	OpenJDK 9:
		https://hg.openjdk.java.net/jdk9/jdk9/hotspot/file/b756e7a2ec33/src/share/vm/runtime/globals.hpp#l2030	
		product_pd(uint64_t, MaxRAM, "Real memory size (in bytes) used to set maximum heap size") 
		range(0, 0XFFFFFFFFFFFFFFFF)                    
	OpenJDK 11:
		https://hg.openjdk.java.net/jdk/jdk11/file/1ddf9a99e4ad/src/hotspot/share/runtime/globals.hpp
		define_pd_global(uint64_t,MaxRAM,                    1ULL*G);

You can check your own VM's default MaxRAM setting:

	OpenJDK 11
	$ java -XX:+PrintFlagsFinal -version | grep MaxRAM
		 uint64_t MaxRAM = 137438953472
		 
	OpenJDK 10:
	$ java -XX:+PrintFlagsFinal -version | grep MaxRAM
	 	uint64_t MaxRAM = 137438953472

	137438953472 bytes is 128 gigabytes
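As far as I can tell from the source (this is my reading, not something I've verified), MaxRAM feeds the heap-size ergonomics through MaxRAMPercentage, so you can also check what heap ceiling the VM derives from it:

	$ java -XX:+PrintFlagsFinal -version | egrep 'MaxHeapSize|MaxRAMPercentage'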


I can set MaxRAM to 2 GiB in JAVA_OPTS  ->  export JAVA_OPTS="-XX:MaxRAM=2147483648"  ->  and Bisq's RES (see htop) stays below ~1.2 GB instead of growing to ~4.8 GB.  (But I haven't let it run for days...)
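For comparison (an untested assumption on my part, not something I've measured), capping the heap directly with -Xmx might have a similar effect, if the growth really is driven by the heap-size ergonomics:

	$ export JAVA_OPTS="-Xmx1g"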


----

After libc malloc()s expand the virtual address space via mmap() syscalls, the app triggers MMU violations when it first touches the allocated range(s) and finds no mapping for one or more addresses.

I saw MMU errors in GC logs last night when using -XX:+ExtendedDTraceProbes plus GC logging, but only today realized there may be a connection to my VM's default MaxRAM=128GB setting.  I had assumed it was due to the extra overhead from -XX:+DebugNonSafepoints and -XX:+PreserveFramePointer, and especially from -XX:+ExtendedDTraceProbes.

I believe these MMU violations in turn result in page faults, which cause extra pages of physical memory to be consumed -- the RES, or resident set size, you can see in htop.
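One way to quantify that (assuming perf is available on the box) is to count the faults for the Bisq process over a fixed window:

	$ perf stat -e page-faults,minor-faults,major-faults -p $(pgrep -f BisqAppMain) -- sleep 120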

Using  $ pmap -x $(pgrep -f BisqAppMain) | more  ...

I find an anonymous memory mapping with an RSS of ~3.8 GB:

           0000000707200000 3994624 3984976 3984976 rw---   [ anon ]

I don't know what this anonymous mapping is, but setting the VM's MaxRAM to 2 GB in JAVA_OPTS reduces its size to ~0.5 GB.

Corresponding to this huge mapping is htop's RES value of ~4.8 GB and VIRT of ~10.4 GB.
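I haven't confirmed what that region is, but the VM's Native Memory Tracking should be able to say whether it's the Java heap (this is a guess about how to identify it, not something I've done yet).  It has to be enabled at startup and then queried with jcmd:

	$ export JAVA_OPTS="-XX:NativeMemoryTracking=summary"
	$ # start Bisq, then:
	$ jcmd $(pgrep -f BisqAppMain) VM.native_memory summary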

---

Here are measurements after starting Bisq with an empty data.dir, using MaxRAM=2GB, and letting it run for 30 minutes (not touching the GUI).

	export JAVA_OPTS="-XX:MaxRAM=2147483648"  (1.5 GB is too little; the OOM Killer kills Bisq)

	RES	1118 MB,  VIRT 6986 MB		(see $ htop)

	The Java process's largest memory mapping is ~0.5 GB  (see $ pmap -x $(pgrep -f BisqAppMain) | more)
	00000000e0000000  524288  524288  524288 rw---   [ anon ]
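To address the "haven't let it run for days" caveat, a rough way to keep watching RES and VIRT over a longer run (just a sketch) is to sample them from ps every few minutes:

	$ while sleep 300; do date +%T; ps -o rss=,vsz= -p $(pgrep -f BisqAppMain); done

(rss and vsz are reported in KiB.)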
	
---

I have some flame graphs showing very large differences in the number of page faults during two profiling runs, each 120 s in duration, begun 30 s after Bisq was started with an empty data.dir.
	bisq-page-faults-2g-maxram.svg		shows 110,393 page faults
	bisq-page-faults-128g-maxram.svg	shows 954,347 page faults
Unfortunately, I can't upload SVG files here.  I'll try to recreate them tomorrow as PNG files; in the meantime I'll see if I can upload them to keybase "standup".
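In case anyone wants to reproduce them, a typical recipe with perf and Brendan Gregg's FlameGraph scripts (https://github.com/brendangregg/FlameGraph) looks roughly like this -- I'm sketching from memory, so the exact invocation may differ:

	$ perf record -e page-faults -g -p $(pgrep -f BisqAppMain) -- sleep 120
	$ perf script | ./stackcollapse-perf.pl | ./flamegraph.pl > bisq-page-faults.svg

Running Bisq with -XX:+PreserveFramePointer, as mentioned above, should help perf walk the Java stacks.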



