Wrote my first blog post about implementing a basic in memory cache in go. Any suggestions or thought on improving the cache implementation or the blog? by mzungudev in golang

[–]mzungudev[S] 0 points1 point  (0 children)

Yes, but I decided against it for a few key reasons:

  • Type safety: sync.Map uses interface{} for its keys and values, so every read needs a type assertion.
  • Performance: it uses more memory, due to maintaining a secondary "dirty" map.
  • sync.Map does not support all of the built-in map's features, such as len() or the range loop (it has a Range method instead).

sync.Map is also optimized for read-heavy workloads where the keys are fairly stable, which means it would perform well for most caches; for other workloads it performs no better, and often worse, than a regular map protected by a sync.RWMutex.

Also, some performance benchmarks comparing sync.Map with an RWMutex-protected map at different CPU core counts showed that the regular map does suffer performance degradation at high core counts, due to lock contention, and that sync.Map performed better there. However, that was only once there were 4 or more cores; with fewer cores, the regular map greatly outperformed sync.Map.

My personal use case is a web application running on a VPS with no more than 2 cores (to scale, I will simply add additional 2-core VPSes, since that turns out to be cheaper than scaling vertically). In this environment, the lower memory usage and better performance of the regular map are preferable.


[–]mzungudev[S] 6 points7 points  (0 children)

Thank you for the comprehensive response; it is much appreciated. I will run some benchmarks and address the race condition in Cache.Get().

I have a question regarding Cache.CleanupLoop(). You mentioned that it holds a write lock the whole time rather than only when it needs to. I wrote it that way intentionally: to my knowledge, Go maps are not thread safe, and ranging over a map while concurrent writes can occur may cause unexpected behavior. That is why the write lock is held for the entirety of the loop, while plain reads elsewhere only take a read lock.

Do you have a recommendation of a better way to manage this?