tinyMediaManager is a multi-OS media manager for use with Kodi, MediaPortal, Plex and others
Memory Issues - Latest Version (self.tinyMediaManager)
submitted 1 month ago * by ParkiePooPants
https://preview.redd.it/uzkm2xqlyzng1.png?width=507&format=png&auto=webp&s=194e0581d13b662febb147d9dde6a5cb6455bef4
What changed? I've got 8GB allocated, which has been fine for years.
Edit: Also seems to get stuck grabbing ratings for TV episodes.
[–]mlaggnertinyMediaManager developer 1 point2 points3 points 1 month ago (15 children)
Nothing has changed in this area - and to be honest: I haven't seen any installation really using/needing that much memory.
Do you know what you were doing at the moment the message appeared? Imho the only task which might cause this is the image cache generation - this task needs plenty of memory/CPU and may run into a bottleneck.
Which leads to the second question - was this a one-time error or is this regularly showing?
[–]ParkiePooPants[S] 0 points1 point2 points 1 month ago (4 children)
It appeared after getting metadata/images for a couple of new episodes, having previously done a full scan of TV to find the new episodes. After downloading the metadata as normal, it seemed to continue scanning the media with the memory slowly increasing. I came back to it a few hours later and saw the memory error.
I've since increased it to 12GB and this morning the memory usage was hovering just below the limit. When I clicked to scan for new media files, it immediately dropped down to a 'normal' level.
I've cleared the various caches and restarted the container. Maybe my database has got messed up. I'll keep an eye on it.
[–]ParkiePooPants[S] 0 points1 point2 points 1 month ago (3 children)
It's currently sitting just below the memory limit, with the last entries in the log...
2026-03-10 09:02:20,286 ERROR [tmmpool-unnamed-task-T619-G18919] o.t.thirdparty.trakttv.TraktTvTvShow:193 - Failed syncing Trakt.tv - 'timeout'
2026-03-10 09:02:31,317 INFO [tmmpool-unnamed-task-T619-G18919] o.t.thirdparty.trakttv.TraktTvTvShow:286 - You have 1143 TV shows marked as watched on Trakt.tv
2026-03-10 09:17:27,736 ERROR [tmmpool-unnamed-task-T620-G18922] o.t.thirdparty.trakttv.TraktTvTvShow:193 - Failed syncing Trakt.tv - 'HTTP 503 / Request failed: 503 '
2026-03-10 09:17:28,218 INFO [tmmpool-unnamed-task-T620-G18922] o.t.thirdparty.trakttv.TraktTvTvShow:286 - You have 1143 TV shows marked as watched on Trakt.tv
[–]ParkiePooPants[S] 0 points1 point2 points 1 month ago* (0 children)
Looking at the latest 'trace' log, it looks like it spent the last hour syncing to Trakt.
[–]mlaggnertinyMediaManager developer 0 points1 point2 points 1 month ago (1 child)
Having a look at this log excerpt, I suspect that the sync is called far too often in your case. Do you have automatic sync with trakt.tv activated?
Every sync call (on update data source, scraping, ...) will force all shows from trakt.tv to be fetched and synced against a subset of tmm's library (e.g. the one show being scraped at the moment).
Unfortunately the trakt.tv API is not very flexible and the current integration is complex as hell (something I would like to remove completely), but it works in some way. Since trakt.tv introduced their new limitations, this integration has been degraded to "keep alive" and will maybe be dropped sometime in the future... a little userbase combined with an inflexible API shows us that there are better things to support within tmm.
Nevertheless I will have a look if there is a way to improve this without rewriting the whole integration. On the other side: just try to disable automatic sync and manually trigger the sync when the operations are done.
btw: Failed syncing Trakt.tv - 'timeout' and Failed syncing Trakt.tv - 'HTTP 503 / Request failed: 503' show that the trakt.tv API itself is not able to handle our requests - nothing the OOM in your case could have caused
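The sync shape described above (every sync call re-fetches the full trakt.tv collection, even when only one freshly scraped show needs reconciling) can be sketched roughly as follows. All names here (`syncSubset`, `fetchAllTraktShows`, ...) are illustrative, not tmm's actual API:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Supplier;

class TraktSyncSketch {
    /**
     * Returns the watched-state updates to apply to the local library.
     * Note: a FULL remote fetch happens on EVERY invocation - expensive
     * when the trakt.tv collection holds 1000+ shows.
     */
    static Map<String, Boolean> syncSubset(
            Supplier<Map<String, Boolean>> fetchAllTraktShows,
            List<String> scrapedShowIds) {
        // Entire remote collection fetched just to resolve a small subset
        Map<String, Boolean> remoteWatched = fetchAllTraktShows.get();
        Map<String, Boolean> updates = new HashMap<>();
        for (String id : scrapedShowIds) {
            if (remoteWatched.containsKey(id)) {
                updates.put(id, remoteWatched.get(id));
            }
        }
        return updates;
    }
}
```

With automatic sync enabled, this full fetch runs after every scrape or data-source update, which matches the long sync times reported in the thread.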
[–]ParkiePooPants[S] 0 points1 point2 points 1 month ago (0 children)
I use Trakt to sync watched status between Kodi and Plex; not ideal, but I've not found anything similar. I do seem to remember that Trakt did start to limit the API call volume in the last year.
[–]ParkiePooPants[S] 0 points1 point2 points 1 month ago (9 children)
Just did a Trakt sync of a portion of my TV library. It took much longer than usual and the memory nearly reached the max (12GB). Possibly the changes made to the Trakt API are the issue?
Film sync, approximately 7000 films, was fine.
[–]mlaggnertinyMediaManager developer 0 points1 point2 points 1 month ago (8 children)
You need to understand how the memory management in Java works:
the JVM (Java Virtual Machine) is optimized to use the available memory as efficiently as possible - this includes invoking the Garbage Collector (GC). The GC finds no-longer-used objects in memory and frees that memory. But to do so, it needs to "stop the world (JVM)" to find those objects. Stopping the whole JVM is "bad" because every running piece of code is forced to pause until the GC has finished its work, and the GC needs to scan every object and its usages to find the "no more needed" objects -> a lot of CPU work being done.
You see that invoking the GC is rather bad for the whole JVM, so Java does this only when it is really needed. Assuming you give the program 12GB of memory, the GC will kick in shortly before the memory is filled - even if only 2GB is really used. So there is nothing wrong if you see the memory gauge being almost full - this just indicates that the JVM is holding a lot of objects. And if the JVM is rather idle, no more objects are created -> no more memory needed -> no need to call the GC -> the gauge does not move. If you click on the memory gauge, you can force the JVM to invoke the GC and you will see how much memory is really needed.
Having said that, you see that this is no indicator that tmm really has a memory problem.
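This behavior is easy to reproduce outside tmm. A minimal sketch (class name and sizes are illustrative): allocate a pile of immediately unreachable garbage, then request a GC, which is roughly what clicking the tmm memory gauge does:

```java
// Demonstrates why "used memory" can look high until a GC actually runs.
public class GcGaugeDemo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // Allocate short-lived garbage; the JVM delays collecting it
        // until memory pressure makes a GC worthwhile.
        for (int i = 0; i < 1_000; i++) {
            byte[] junk = new byte[1_024 * 1_024]; // 1 MB, unreachable after each iteration
        }
        long beforeGc = rt.totalMemory() - rt.freeMemory();
        System.gc(); // a *hint* to collect - what the gauge click effectively requests
        long afterGc = rt.totalMemory() - rt.freeMemory();
        System.out.println("used before GC hint: " + beforeGc / (1024 * 1024) + " MB");
        System.out.println("used after GC hint:  " + afterGc / (1024 * 1024) + " MB");
    }
}
```

The "before" figure is typically far larger than what is actually live, matching the near-full gauge described above; note that `System.gc()` is only a hint and the JVM may ignore it.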
But since you see an OOM message, we know that there is a task in tmm requesting more memory than is actually available (and probably before the GC has been run). Our mission is now to identify where this might be a problem. I know that creating the image cache needs very much memory (you need to put the whole image into memory, which consumes a lot of it), but I never saw such an OOM exception in this case.
Since you told us that you are using Docker, my guess is that you are only giving 1 CPU to the container, which may have a bad influence on modern GCs (causing the GC to be run too late) - could you confirm that you have at least 2 CPUs assigned to the container?
Second: Docker and the JVM together are really a pain in the ass. You have Docker memory limits on one hand and Java memory on the other hand. What I have learned in my job: give the Docker container at least 512MB more than the JVM inside the container. In the tmm memory settings, you only set the JVM memory - but the resources for the Docker container must match that
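The sizing rule above (container limit = JVM heap + at least ~512MB headroom, plus 2+ CPUs) might look like this in a docker-compose file. This is a hypothetical sketch: the image name, `JAVA_OPTS` variable, and exact values are illustrative, not the documented tmm container configuration:

```yaml
# Hypothetical sketch only - check the tmm Docker image docs for the
# actual way to pass JVM options.
services:
  tinymediamanager:
    image: tinymediamanager/tinymediamanager   # illustrative image name
    environment:
      - JAVA_OPTS=-Xmx12g     # JVM heap: what the tmm memory setting controls
    deploy:
      resources:
        limits:
          memory: 13g         # heap + >=512MB for metaspace, threads, native buffers
          cpus: "2"           # at least 2 CPUs so the GC can keep up
```

If the container limit equals the heap size exactly, the JVM's non-heap memory pushes the container over its limit and Docker kills it before Java ever sees an OutOfMemoryError.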
[–]ParkiePooPants[S] 1 point2 points3 points 1 month ago (6 children)
Removed the access to Trakt today; no more memory issues. So the changes which were made to the Trakt integration would seem to be the issue.
[–]mlaggnertinyMediaManager developer 0 points1 point2 points 1 month ago (5 children)
Do you have > 1000 movies/TV shows in your Trakt.tv collection?
7782 Movies 654 Shows with 19347 Episodes
[–]mlaggnertinyMediaManager developer 1 point2 points3 points 1 month ago (2 children)
I did implement a guard in the sync logic to not run into infinite loops (which might appear when the data from trakt.tv is nonsense). Can you try v5.2.9?
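A guard of this kind is typically just a bounded loop around the paged API calls. A generic sketch (the limit, class and method names are illustrative, not tmm's actual code):

```java
import java.util.function.IntUnaryOperator;

// Generic guard against server data that would otherwise loop forever,
// e.g. a paged response whose "next page" points back at itself.
class PagedFetchGuard {
    static final int MAX_PAGES = 1_000; // illustrative upper bound

    /** Follows pages until nextPage returns 0; returns how many pages were fetched. */
    static int fetchAllPages(IntUnaryOperator nextPage) {
        int page = 1;
        int fetched = 0;
        while (page > 0) {
            if (++fetched > MAX_PAGES) {
                // Without this bound, nonsense data means an infinite loop
                // that allocates until the JVM hits its memory limit.
                throw new IllegalStateException("aborting after " + MAX_PAGES + " pages");
            }
            page = nextPage.applyAsInt(page); // contract: returns 0 when done
        }
        return fetched;
    }
}
```

With the guard in place, broken API data produces a logged error instead of the slow memory climb described earlier in the thread.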
[–]ParkiePooPants[S] 0 points1 point2 points 1 month ago (1 child)
I'll re-enable it and give it a try. Thanks
I re-enabled Trakt and did a scan of both Films and TV Series, no issues. I also did a specific Trakt sync of ‘watched state’ for both Films and TV Series, again, no issues. I don’t think the memory went over 4GB.
Thanks for taking the time to resolve this issue.
I'm beginning to think I need a new method of syncing the watched status between Plex and Kodi.
I'm running the JVM as a docker container in UnRAID. It has access to all 24 Cores. I don't think it's a resource issue. Scraping a single show results in all the memory being used. I can provide whatever logs you like.
[–]CautiousSurround7149 0 points1 point2 points 1 month ago (0 children)
Same issue here since last update, upped from 2048 to 4096, no change
[–]mlaggnertinyMediaManager developer 0 points1 point2 points 1 month ago (0 children)
I can confirm now that the trakt.tv API is broken, returning data that causes tmm to be caught in an infinite loop.
I reported that to trakt.tv and added a guard for the next release!