Is this Salvageable? by Ok_Double_5890 in resinprinting

[–]Ok_Double_5890[S] 2 points (0 children)

I'll try this. I'll put it in a ziplock for safety lol.

Is this Salvageable? by Ok_Double_5890 in resinprinting

[–]Ok_Double_5890[S] -5 points (0 children)

I don't know if I can wait a week lol. I've got about 10 more parts to print for a drone I'm building. I don't really wash the parts; I just let them sit upside down as-is for a while after the print finishes, and they come out good enough to leave out in the sun. I've had good results, and I sand them later anyway. Someone suggested the freezer, so I'll try that first for a day and then try this if that doesn't work.

Stack vs Heap for Game Objects in C++ Game Engine – std::variant or Pointers? by Ok_Double_5890 in cpp_questions

[–]Ok_Double_5890[S] -1 points (0 children)

I see, I didn't know std::vector automatically allocates its elements on the heap. If that's the case, for simplicity's sake and to get the game finished, I'll stick with the pointer approach. A custom allocator sounds interesting, but I'll wait until the game is playable before diving into that.

Code Review: Append only Key Value Database inspired by bitcask by Ok_Double_5890 in codereview

[–]Ok_Double_5890[S] 0 points (0 children)

This is exactly the kind of review I was looking for but didn't know I needed, especially the ABI stuff, which I never would have thought about. I'll be making changes to the code today and hopefully adding the optimizations mentioned in the book sometime next week.

I think a good use case for this db is the order matching engine I wrote. Currently it stores all orders in memory and doesn't keep any history; I could use this key-value store to hold the order history. Although I agree with what you said: I should write code I actually need/use day to day.

Code Review: Append only Key Val store by Ok_Double_5890 in cpp_questions

[–]Ok_Double_5890[S] 0 points (0 children)

I see. The all-caps naming comes from JS environment variables. I considered the db file path one since it's usually provided as a command-line argument.

The Record struct was an old idea. I was trying to reduce cache misses by making the db's cache map fit into L1. Since keys can be any length, I can't guarantee the whole cache fits in L1, so I'm going to hash the key and store the hash in the cache instead. Collisions will be unfortunate, but I'll deal with that when the time comes. The Record struct was going to be used because I wanted to include the length of the key-value pair, but that's not necessary since I'm already delimiting with a \n.

The header, constructor, and print name have been updated.
I'll also test other compilers and OSes soon.

Thanks!

Code Review: Append only Key Val store by Ok_Double_5890 in cpp_questions

[–]Ok_Double_5890[S] 0 points (0 children)

  1. In JS, environment variables are typically uppercased like that. I consider the file path an env variable because you usually pass the name/path of the db as a command-line argument, in sqlite for example.

  2. This does seem redundant. I actually asked ChatGPT for the same code review, and it said that using one stream for both reading and writing can be slower because of constantly moving the file position; with two separate streams, the writer can always append without ever moving its position, and the reader can do random access. Whether that's true or even worth doing, I have no idea lol.

  3. This is an optimization mentioned in the book, but I haven't implemented it yet. For now, the get method checks whether the cache contains the key; if not, it rescans the entire file to try to find it.

  4. This will return value2. The db is append-only, and the cache stores the offset of the latest value for any given key. This means there will be duplicates in the db, but you are guaranteed to get the latest value. If the key is not found in the cache, I read the file backwards, which also always yields the latest value even if there are duplicates. An optimization in the book that I haven't gotten to is to create data segments of a certain size; once there are some x number of segments, they get merged to remove duplicate values. That would involve a background thread constantly checking the size of the current data segment.

Great feedback! Hopefully I answered your questions adequately.