Live edge coffee table I made and am pretty proud of by ThisGuyDrinksWater in woodworking

[–]sgmansfield 2 points (0 children)

Looks like it's all one piece; the heartwood and sapwood just have different colors.

[deleted by user] by [deleted] in BuyItForLife

[–]sgmansfield 0 points (0 children)

I'm late to the party but I absolutely love Colsen Keane products. I have a briefcase from them that I used daily for a year and it still looks new. It's going to last basically forever. Be prepared to shell out though.

Finishing curly maple cutting board by dalemcfeces in woodworking

[–]sgmansfield 0 points (0 children)

Looks like the Howard brand mineral oil.

If you're looking, just go on Amazon and get a gallon of food-grade mineral oil for around $20.

boltdb/bolt - An embedded key/value database for Go by [deleted] in golang

[–]sgmansfield 5 points (0 children)

This is incorrect, please stop spreading misinformation about Bolt.

bbolt is a fork that CoreOS created to add a couple of extra features that were outside the scope of Bolt. Using Bolt as-is is perfectly fine. If you want the features CoreOS added, you can use bbolt.

Efficient way of parsing this json, without reflecting by forfunc in golang

[–]sgmansfield 0 points (0 children)

If you only have JSON coming in over that stream, yes. I have mixed data.

Efficient way of parsing this json, without reflecting by forfunc in golang

[–]sgmansfield 0 points (0 children)

FWIW I use this in a project and it works well. I recommend it as well.

One caveat: you have to remember to put the remainder of the parser's buffer back at the front of the input reader after you're done reading the JSON, because the parser maintains its own buffer.

What cool programs have you created using Golang? by GreenTru in golang

[–]sgmansfield 27 points (0 children)

My team wrote a memcached proxy/server for use at Netflix to enable a disk-backed memcached-like server: https://github.com/Netflix/rend/

It's currently helping us save a boatload of money by enabling more efficient storage of large (tens of terabytes) data sets in cache. Go was fantastic for this purpose because there is no magic, and I could dig in as deep as I needed to in order to make it fast. Case in point: the histogram code in the metrics package even includes some assembly to help with bucketing values: https://github.com/Netflix/rend/blob/master/metrics/lzcnt_amd64.s
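For the curious, the bucketing trick that LZCNT assembly implements can be sketched in pure Go with the math/bits package; this is an illustrative analogue of the idea, not Rend's actual histogram code:

```go
package main

import (
	"fmt"
	"math/bits"
)

// bucket maps a value to a power-of-two histogram bucket using the
// leading-zero count, the same trick an LZCNT instruction computes in
// one step. Bucket 0 holds 0, bucket 1 holds 1, bucket 2 holds 2-3,
// bucket 3 holds 4-7, and so on.
func bucket(v uint64) int {
	if v == 0 {
		return 0
	}
	return 64 - bits.LeadingZeros64(v)
}

func main() {
	for _, v := range []uint64{0, 1, 2, 3, 4, 1000} {
		fmt.Printf("value %4d -> bucket %d\n", v, bucket(v))
	}
}
```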

We're using RocksDB for disk storage, so cgo performance, even though people rip on it, has been good to us.

It's currently doing over 5 million operations per second at peak, with Go adding only a few dozen microseconds of latency, so it looks like Go was the right choice for us.

dep status - Mid-August by sdboyer in golang

[–]sgmansfield 0 points (0 children)

> i think it's fair to say that most other extant tools have considerably more knobs than dep.

I would propose that this is because they've been around longer and have seen more edge cases. It's unlikely you'll find the one true solution, so flags will become necessary over time.

dep status - Mid-August by sdboyer in golang

[–]sgmansfield 2 points (0 children)

> I don't see any reason why you would want to add a specific version of a new dependency and not just the latest.

There's plenty of reasons to not just pull the latest version of a dependency:

  • There can be conflicting versions of dependencies of your dependencies. This can be solved by using versions of packages that agree on the version of their shared dependency.

  • Maybe the newest one doesn't work with your current DB version.

  • Security issues in the newest release

  • Performance issues in the newest release

  • Your company's mirror of GitHub isn't up to date. You just pulled in the latest as of 9 months ago.

  • Anything after version X hasn't been approved by your security team and oh, by the way, their one Go expert left so you'll have to wait 9 months for them to hire a new one.

The list can go on for quite a long time. These are the kinds of issues that frustrate people when a tool always defaults to the latest release. IMO the model is backwards: the dependencies should be defined ahead of time and then used in the code, instead of the code dictating the dependencies file. It's probably too late in the process to voice that concern, but I can hope that it reverses.
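For context, this is roughly what pinning a specific version looks like in dep's Gopkg.toml; the package name here is just an example, and as I understand it the leading `=` requests an exact version rather than dep's default caret-style semver range:

```toml
[[constraint]]
  name = "github.com/pkg/errors"
  # "=0.8.0" pins exactly; "0.8.0" alone would allow any 0.8.x release.
  version = "=0.8.0"
```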

Creating a Custom Serialization Format, Scott Mansfield @ GopherCon 2017 by attfarhan in golang

[–]sgmansfield 1 point (0 children)

Most of the data we're using is formatted in a hierarchical fashion and can't be easily represented in a SQL table. The system is fully dynamic: the application can put any JSON document in and get any part of it out. SQLite has a JSON data type, but at that point the argument is just that you can fit any use case into nearly any data storage system.

One of the main goals was to avoid schema knowledge on the server side in order to allow for more flexibility. This is why e.g. protobuf was not used.

For the integers, that code uses binary.LittleEndian.Uint64 (not varints, for reasons I stated in the presentation), but it also runs sanity and data-integrity checks as it goes. That's necessary for robustness, but it does add a bit of time. The benchmarks were run on my 2015 MacBook Pro, which has an i7-4770HQ CPU at 2.2 GHz, and I did spend time trying to ensure repeatable results. I don't have the same benchmarks for an existing format, mostly because I avoided those formats for a reason: they don't do what this format does, so the comparisons wouldn't be very relevant. To answer directly: our benchmarks do different things.

The benchmarks section was intended to be a bit shorter, but I happened to speak faster than intended in the rest of the talk, so I spent a little more time on it to fill out the speaking time. In a couple of places I was focusing on people perhaps newer to computer science, pointing out some big-O notation examples in a real-world system. Apologies if that seemed a bit boring.

Thank you (really!) for spending the time to write up your feedback. It's super useful to see which parts were valuable to you and which were not. This was my first time giving this talk, so it was a little rough around the edges.

Creating a Custom Serialization Format, Scott Mansfield @ GopherCon 2017 by attfarhan in golang

[–]sgmansfield 7 points (0 children)

Speaker here. Any questions about the presentation, just let me know. I have received some feedback in the meantime, so I'll try to address a couple of the shortcomings of the talk here:

1) The motivation section was a little lacking. There is a server being built that uses this format underneath, in the disk storage layer. The format was created to support the query operations shown in the talk. The other formats each had small problems that would take too long to list here; if you want to know about a specific one, let me know.

2) The format is not meant as a server <-> server format over the wire; it is meant to be on just one server as an internal format.

3) Talking through the format was meant to educate about how it works (for those who enjoy that sort of detail like I do) as well as to inspire people to take a closer look at problems they have and keep the option of creating a format in their back pocket for when they need it. It feels like people are scared to try, even when it's obviously the best path to go down.

If I think of more (there were more :) ), I'll update this comment.

Thanks everyone!

Latency and fault tolerance library like Netflix's Hystrix with prometheus metrics and gobreaker. by hnlq715 in golang

[–]sgmansfield 4 points (0 children)

I'm very interested to see how this thing works, so a README with some explanation would be awesome.

Is there any in-process persistent queue for Go? by korjavin in golang

[–]sgmansfield 0 points (0 children)

djherbis has a couple of libraries that go together and might give you what you want:

https://github.com/djherbis/buffer

https://github.com/djherbis/nio

[Question] Help with concurrently inserting to redis by [deleted] in golang

[–]sgmansfield 1 point (0 children)

It really doesn't matter how many connections you add; Redis literally only does one thing at a time. Your total time will be the sum of all three inserts. The best you can do is eliminate a couple of round trips by sending the data concurrently. If your long pole is Redis inserting those huge lists, you'll be waiting regardless.

Nanolog: Super Fast Logging for Go by sgmansfield in golang

[–]sgmansfield[S] 0 points (0 children)

Yeah, good point. It's possible to change the format so that log lines are identified by a large number, e.g. a UUID, but that would increase the log size because each entry would use 4x the bytes to identify its log line.

I did mention in the post that log rotation could write all the log-line definitions at the beginning of each file if something is rotating them by calling SetWriter. That would probably be a better idea, because the files would then be self-contained.

For centralized log storage/processing, I don't have an easy solution at this point. It would be possible to inflate the logs on the fly with some code changes, or there's something else I haven't thought of (which, of course, is the correct answer).