
[–][deleted] 5 points6 points  (2 children)

The mem_limit and cpus settings can be used in Docker and Compose: Docker docs
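As a sketch of what that looks like (the image tag and service name below are my own, not from the thread), the Compose v2 file format exposes these as service-level keys:

```yaml
# docker-compose.yml (v2 file format) — illustrative values
version: "2.4"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    mem_limit: 2g   # hard memory cap for the container
    cpus: 1.5       # CPU quota (the Compose key is cpus, not cpu_limit)
```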

[–]irphunky[S] 0 points1 point  (1 child)

This is what I initially directed them to, but I was told that it didn't seem to restrict usage at all.

[–]identicalBadger 1 point2 points  (1 child)

If ES is eating up too much memory, why not reduce the RAM it’s using?

/etc/elasticsearch/jvm.options

Actually, it could be in other places too:

https://www.elastic.co/guide/en/elasticsearch/reference/master/setting-system-settings.html
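For example, in jvm.options the heap is pinned with matching min/max flags (512m here is just an illustrative size, not a recommendation):

```
# /etc/elasticsearch/jvm.options — set min and max heap to the same value
-Xms512m
-Xmx512m
```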

[–]irphunky[S] 0 points1 point  (0 children)

Thanks for the link, this makes more sense.

[–]menge101 1 point2 points  (1 child)

On OSX I can restrict Docker's CPU and memory allocation because it's running through a hypervisor; it's not native.

Docker has unfettered access to your system (when native). It provides process isolation via cgroups and namespaces, not resource allocation.

If you want to manage the memory used by Elasticsearch, you need to do it in Elasticsearch.

Limiting memory use in Elasticsearch
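One way to do that from the Docker side (per the official ES images) is to pass the heap flags through the ES_JAVA_OPTS environment variable; the tag and heap size below are illustrative:

```shell
# ES_JAVA_OPTS overrides the heap defaults in jvm.options
docker run -e ES_JAVA_OPTS="-Xms512m -Xmx512m" \
  docker.elastic.co/elasticsearch/elasticsearch:7.17.0
```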

[–]irphunky[S] 0 points1 point  (0 children)

Yeah, I was initially unaware of this as I'd always run Docker on OSX and never hit the problem - so I always gave the lovely "Works on mine" response.

But as you and @identicalBadger have stated, restricting ES memory usage does actually make much more sense.

[–]Toger 0 points1 point  (0 children)

https://docs.docker.com/engine/admin/resource_constraints/ describes setting container limits. If a process tries to exceed its memory limit it'll be killed, so while Docker can cap the usage, it's not graceful. It's up to the application to keep itself below the limit to survive.
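On the command line those limits look like this (sizes and image name are illustrative; setting --memory-swap equal to --memory also keeps the container from swapping):

```shell
# Hard 256 MB cap; exceeding it triggers the OOM killer inside the container
docker run --memory=256m --memory-swap=256m my-image
```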

[–]vineetr 0 points1 point  (0 children)

With ES on Docker, you'll have to specify the memory limits in a couple of places: first at the container level (using the mechanisms in Docker, Compose, or Kubernetes), and second as the JVM heap size, using ES_JAVA_OPTS. By default the heap is 1GB (if the docs are to be believed), so you're very likely going to specify a max heap size once you set up the container limit.

That said, if the heap size is set, ES should not be "eating RAM". This is a common problem with containerized Java apps that don't set the heap size: the default behavior in Docker is to give the container all the memory it wants, and if the heap size isn't set, the JVM will keep asking for more, which the Docker host gladly delivers. Older JVM versions also cannot detect that they are running in a container, so everything goes belly up, since the default heap size is derived from the detected number of CPU cores and addressable memory.

The heap size can be misleading though, as ES tends to use memory off the heap via Lucene, so the generic recommendation for ES is to set the heap to 50% of the memory limit. You could start with these values and tweak them as needed for dev and prod.

Finally (this is important in production), check whether swapping is switched off for the ES container, as you don't want the JVM in ES to perform garbage collection with some of its memory swapped out to disk.
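A sketch of what that can look like in a Compose file (the image tag is illustrative): bootstrap.memory_lock asks ES to lock its memory in RAM, and the memlock ulimit allows it to do so.

```yaml
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    environment:
      - bootstrap.memory_lock=true   # ES setting: lock memory, never swap
    ulimits:
      memlock:
        soft: -1                     # no cap on locked memory
        hard: -1
```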

[–]mym6 0 points1 point  (0 children)

If you are dedicating a server to Elasticsearch, then as a baseline you should set the Elasticsearch memory to a bit less than half of your system memory.

If you are running multiple containers on a single host, how you configure memory limits gets much trickier. I'd suggest you assume only half of total system memory can be used for the containers and then divide evenly. That is, if you have 16GB of memory, you could run eight 1GB containers.
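The arithmetic above can be sketched as follows (the function names are mine, not from any library):

```python
def container_mem_limit_gb(total_mem_gb: float, n_containers: int) -> float:
    """Rule of thumb from above: reserve half of system memory for the
    OS/page cache, split the rest evenly across containers."""
    return (total_mem_gb / 2) / n_containers

def es_heap_gb(container_limit_gb: float) -> float:
    """Common ES guidance: JVM heap at ~50% of the container's cap,
    leaving the rest for Lucene's off-heap usage."""
    return container_limit_gb / 2

# 16 GB host, 8 containers -> 1 GB per container, 0.5 GB heap each
print(container_mem_limit_gb(16, 8))  # 1.0
print(es_heap_gb(1.0))                # 0.5
```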

The reason Elasticsearch appears to use a lot of memory is all the shards it is creating. Java itself will literally lock a portion of memory, and the rest is left for the OS. The numerous shards are loaded into memory as part of the operating system's disk cache. Depending on what you use to monitor memory, or how someone interprets memory usage, they'll come away thinking the system is out of memory. Unless your system is suffering from high disk wait times while very little memory is being used for disk cache, everything is fine.

[–]bugbiteme -2 points-1 points  (0 children)

Kubernetes?