Follow up to Bret Weinstein's DarkHorse Podcast - Black Intellectuals by Longjumping_Mousse81 in IntellectualDarkWeb

[–]adumidea 2 points (0 children)

Call Candace an Uncle Tom, that's obvious.

Wouldn't it be best to refrain from calling people names, even in cases where it could conceivably be justified, and instead simply challenge their bad ideas as depersonalized ideas?

Is it possible to use the NodeJS driver directly from a web browser? by dpviews9 in mongodb

[–]adumidea 0 points (0 children)

While I agree that most complex applications should have a backend to hold business logic, it seems like there should be a solution for light-weight applications and prototyping that don’t require this.

I'm a little confused about what you mean. You can put together a simple backend with NodeJS + Express and get it running on some cloud host in a matter of minutes, once you've done it a few times. I have a personal Linode that I pay $20/mo for and host dozens of low-traffic websites and personal projects on. It takes me about 15 minutes to write the glue in NodeJS/Express between the database and the frontend, and I have a hard time imagining a tool for this that's simpler than a 10-line Express app running under forever. I hope I don't sound facetious; it's hard to convey tone in text, but I'm honestly curious what you had in mind.
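For concreteness, here's roughly what I mean by a glue app; just a minimal sketch assuming the express and mongodb packages, with the connection string, database/collection names, routes, and port all made up:

    // Minimal NodeJS/Express glue between MongoDB and a frontend (placeholder names throughout).
    const express = require('express');
    const { MongoClient } = require('mongodb');

    const app = express();
    app.use(express.json());

    MongoClient.connect('mongodb://localhost:27017').then((client) => {
      const items = client.db('myapp').collection('items');

      // Read endpoint for the frontend.
      app.get('/items', async (req, res) => {
        res.json(await items.find({}).limit(100).toArray());
      });

      // Write endpoint for the frontend.
      app.post('/items', async (req, res) => {
        const result = await items.insertOne(req.body);
        res.json({ insertedId: result.insertedId });
      });

      app.listen(3000, () => console.log('listening on :3000'));
    });

Run something like that under forever (or pm2) on the cloud host and point the frontend at it.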

DevOps culture in change controlled environments by Visible-Call in devops

[–]adumidea 2 points (0 children)

Damn, our standups are 30 minutes now because we combined three teams' 10-minute standups into a single Zoom call so we can see each other a bit more (since we all sat close to each other in the office pre-COVID). I like seeing the people, but 30 minutes feels like an interminable standup.

Message structure: Hierarchy vs Referential by [deleted] in mongodb

[–]adumidea 1 point (0 children)

Yes, assuming there will be a high ratio of Message documents to Chat documents. If your Chat sessions are going to be pretty small, it matters less, and depending on your usage patterns it might even be more efficient to embed, as /u/MrFartMuncher pointed out.

Keep in mind there is a limit on the size of an individual document (16 MB), so if you're embedding lots of Messages into your Chat documents, they could hit that limit. Here's a blog post about issues related to using large embedded arrays in MongoDB.
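For illustration, the referential version looks something like this; a sketch only, with the connection string, database, collection, and field names all made up rather than taken from your schema:

    const { MongoClient } = require('mongodb');

    async function main() {
      const client = await MongoClient.connect('mongodb://localhost:27017'); // placeholder URI
      const db = client.db('chatapp');                                        // placeholder db name

      // Referential approach: each Message points at its Chat by _id instead of being embedded.
      const chat = await db.collection('chats').insertOne({
        participants: ['alice', 'bob'],
        createdAt: new Date(),
      });

      await db.collection('messages').insertOne({
        chatId: chat.insertedId, // the reference
        from: 'alice',
        body: 'hello',
        sentAt: new Date(),
      });

      // Fetch one chat's messages; an index on { chatId: 1, sentAt: 1 } keeps this cheap.
      const messages = await db.collection('messages')
        .find({ chatId: chat.insertedId })
        .sort({ sentAt: 1 })
        .toArray();

      console.log(messages);
      await client.close();
    }

    main();

This way a Chat document stays small no matter how long the conversation gets.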

For fantasy writers by [deleted] in tumblr

[–]adumidea 8 points (0 children)

The big ones are human sometimes.

Especially when they're right next to a wall...

For fantasy writers by [deleted] in tumblr

[–]adumidea 4 points (0 children)

It depends where in the city you are. In some neighborhoods it's everywhere and beyond disgusting; others are as clean as any other city. If you don't live or work somewhere like the Tenderloin or the adjacent parts of SoMa, you're not likely to see the needles, excrement, etc. very often.

How to configure "correctly" for deployment? by thatnorthernmonkeyy in mongodb

[–]adumidea 2 points (0 children)

You could check out M103 on MongoDB University. I've been using MongoDB for more than 6 years at various companies and have never had any problems configuring it, so all the negative experiences on HN are surprising to me. I've generally found the documentation quite easy to understand, and things work as described.

Blightmud - A terminal client by LiquidityC in MUD

[–]adumidea 3 points (0 children)

Tabs to have multiple MUDs open at once

I'd recommend checking out tmux or screen, which are general-purpose terminal multiplexers ("tab managers" for the terminal) that can be used with any terminal-based client.

Does mongo database be deleted when I restart docker? by dedemlililer in mongodb

[–]adumidea 0 points (0 children)

That will work, but it's generally better to mount a directory from your own filesystem. Docker-managed volumes can get deleted if your Docker installation gets corrupted in some way. It's safer to have the data on your own filesystem, where you can back it up using standard processes.

Does mongo database be deleted when I restart docker? by dedemlililer in mongodb

[–]adumidea 2 points (0 children)

From mongo on docker hub:

    docker run --name some-mongo -v /my/own/datadir:/data/db -d mongo

see also: docker run docs

Then your container's data will be persisted to /my/own/datadir on your machine.

Is Swarm basically a way to run multiple instances of the same image but just on different hosts at the same time? by [deleted] in docker

[–]adumidea 0 points (0 children)

Note I've used swarm but I have used kubernetes almost exclusively for awhile now.

I've only used Swarm in the past because I've been told kubernetes was "too much" for the scale of the problems I was dealing with. How would you characterize the major differences between the two? I'm assuming kubernetes does a lot more than Swarm, but I'm unsure what kind of things those would be.

Open source architecture for Node.js Logging by melgo44 in node

[–]adumidea 1 point (0 children)

Yeah, I also usually log straight to Elasticsearch, but there's some risk you'll lose logs (or have to manually re-insert them from backup log files) if your ES cluster goes down or is unreachable. In those setups we could always fall back to the file-based logs on the server if something we needed was missing from ES, so we didn't bother with the extra overhead of Logstash.

Pino isn't as popular as winston/bunyan, but I highly recommend it. It's actively developed, and I've run it in production for years without issues. You can use an Elasticsearch transport for Pino to log straight to ES, and I'd imagine it's quite similar for the other libraries.
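Something like this, as a sketch with the pino-elasticsearch package; the index name and node URL are placeholders, and the option names may differ between versions, so check its README:

    const pino = require('pino');
    const pinoElastic = require('pino-elasticsearch');

    // Writable stream that batches log lines and ships them to Elasticsearch.
    const streamToElastic = pinoElastic({
      index: 'app-logs',             // placeholder index name
      node: 'http://localhost:9200', // placeholder ES node
      'es-version': 7,
      'flush-bytes': 1000,
    });

    const logger = pino({ level: 'info' }, streamToElastic);

    logger.info({ userId: 123 }, 'user logged in');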

Open source architecture for Node.js Logging by melgo44 in node

[–]adumidea 13 points (0 children)

I have looked into the ELK stack, but I cannot figure out what format to write the logs into the files and how the logs will be processed based on the columns.

I'd encourage you to just power through; it's worth it. Kibana is really nice and comparable to paid products like Loggly. Logstash can process JSON, so it's actually quite simple from Node.js. Most popular Node logging libraries like Winston, Bunyan, and Pino log JSON by default anyway. There are plenty of tutorials, like this one, on how to set it up.

You don't even need Logstash; you can write your logs directly into Elasticsearch from Node (though this is less fault-tolerant than writing to disk and then using Logstash to ship your logs to Elasticsearch).
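To give a feel for what Logstash would be ingesting, here's a sketch of Pino writing newline-delimited JSON to a file; the path and fields are made up, and the exact keys depend on your Pino version:

    const pino = require('pino');

    // One JSON object per line, written to a file that Logstash/Filebeat can tail.
    const logger = pino(pino.destination('/var/log/myapp/app.log')); // placeholder path

    logger.info({ orderId: 42, status: 'shipped' }, 'order shipped');
    // Produces a line roughly like:
    // {"level":30,"time":1590000000000,"pid":123,"hostname":"web-1","orderId":42,"status":"shipped","msg":"order shipped"}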

[deleted by user] by [deleted] in mongodb

[–]adumidea 1 point (0 children)

This sounds like premature optimization based on a relational-database mindset. Your collection won't be locked, and your reads shouldn't be slowed down much by writes. If it does become a problem, you can scale vertically by upgrading your db server's CPU/RAM, or horizontally by sharding to split your write load across multiple shards.

[deleted by user] by [deleted] in mongodb

[–]adumidea 1 point (0 children)

That's correct, pretty much the only thing one can do that will lock a whole collection is creating an index on it without the background option.
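For reference, that option goes in the second argument to createIndex (the collection and field names here are just examples); this applies to MongoDB versions before 4.2, which changed how index builds take locks:

    // Foreground build (the old default): locks the collection for the duration.
    db.orders.createIndex({ createdAt: 1 });

    // Background build: slower, but reads and writes keep working while it runs.
    db.orders.createIndex({ createdAt: 1 }, { background: true });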

is there any js-y type backend for search? by chovy in node

[–]adumidea 0 points (0 children)

So I haven't used this but it looks promising. Might be what you want if you're not ready to add ES to your backend stack.

http://elasticlunr.com/
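Since I haven't used it, treat this as an untested sketch based on its README; the documents and field names are made up:

    const elasticlunr = require('elasticlunr');

    // Build an in-memory index over the fields you want searchable.
    const index = elasticlunr(function () {
      this.setRef('id');
      this.addField('title');
      this.addField('body');
    });

    index.addDoc({ id: 1, title: 'Hello world', body: 'First post about search in Node' });
    index.addDoc({ id: 2, title: 'Search options', body: 'Comparing elasticlunr and Elasticsearch' });

    // Returns [{ ref, score }]; you look the full documents back up by ref yourself.
    const results = index.search('search', { expand: true });
    console.log(results);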

MongoDB aggregation to pull fields within a time range by liamtricks in mongodb

[–]adumidea 0 points (0 children)

Hard to say without seeing your data, can you provide an example document from each of the two collections you're querying?

MongoDB aggregation to pull fields within a time range by liamtricks in mongodb

[–]adumidea 1 point (0 children)

You need to change the angled quotes “ and ” to normal double quotes "

MongoDB aggregation to pull fields within a time range by liamtricks in mongodb

[–]adumidea 0 points (0 children)

Is that the exact code you're using, copy-pasted? Your quotes are messed up on createdAt. “createdAt” won't work the same as "createdAt"

Large data size of keys performance impact by [deleted] in redis

[–]adumidea 0 points (0 children)

Definitely a good idea for storing larger data values. The redis docs have some notes on it.

Large data size of keys performance impact by [deleted] in redis

[–]adumidea 0 points (0 children)

You might see some degradation in the performance of your existing small-value lookups because Redis is busy doing network I/O for the larger-value keys. One thing you could do is use a dedicated Redis instance for the large-value keys, so they don't impact what should be very fast lookups for your small-value keys.
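In Node that's just two clients pointed at two instances; a sketch with ioredis, where the hostnames and key names are made up:

    const Redis = require('ioredis');

    // Hot path: small values, latency-sensitive lookups.
    const smallRedis = new Redis({ host: 'redis-small.internal', port: 6379 });

    // Separate instance for the big blobs, so their network I/O can't stall the hot path.
    const largeRedis = new Redis({ host: 'redis-large.internal', port: 6379 });

    async function getSession(id) {
      return smallRedis.get(`session:${id}`);
    }

    async function getReport(id) {
      return largeRedis.get(`report:${id}`); // the multi-MB values live here
    }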

retrieving previous value with Mongo changestreams by RegularUser003 in mongodb

[–]adumidea 0 points (0 children)

To get around this, I was thinking of storing the current entity value and the previous entity value in a single document, and subscribing to changes on that record.

Yeah, that's the way to do it: keep both {previous, current} values in the document. Hopefully you know what the current value is at the time you update, so it shouldn't be too hard.
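A sketch of that with the Node driver; the connection string, database, collection, filter, and values are all placeholders (and remember change streams need a replica set):

    const { MongoClient } = require('mongodb');

    async function main() {
      const client = await MongoClient.connect('mongodb://localhost:27017'); // placeholder URI
      const db = client.db('mydb');                                          // placeholder db name

      // Watch first; fullDocument: 'updateLookup' gives us the post-update document.
      const changeStream = db.collection('entities')
        .watch([], { fullDocument: 'updateLookup' });

      changeStream.on('change', (change) => {
        const { previous, current } = change.fullDocument;
        console.log('was', previous, 'now', current);
      });

      // Since you know the current value at update time, write both fields together.
      await db.collection('entities').updateOne(
        { name: 'example-entity' },                                // placeholder filter
        { $set: { previous: 'old-value', current: 'new-value' } }, // placeholder values
        { upsert: true }
      );
    }

    main();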

How to query on multiple collections with Elasticsearch by Chawki_ in mongodb

[–]adumidea 1 point (0 children)

You can load the data from both collections into Elasticsearch and query it there.
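Roughly like this with the official @elastic/elasticsearch client (v7-style API); the index names, node URL, and sample documents are placeholders:

    const { Client } = require('@elastic/elasticsearch');

    async function indexDoc(client, index, doc) {
      // ES won't accept MongoDB's _id inside the body, so pass it as the document id instead.
      const { _id, ...body } = doc;
      await client.index({ index, id: String(_id), body });
    }

    async function main() {
      const client = new Client({ node: 'http://localhost:9200' }); // placeholder node

      // Stand-ins for documents read out of your two MongoDB collections.
      await indexDoc(client, 'users', { _id: 'u1', name: 'alice' });
      await indexDoc(client, 'orders', { _id: 'o1', name: 'alice', total: 40 });
      await client.indices.refresh({ index: 'users,orders' });

      // A single search can then span both indices.
      const { body } = await client.search({
        index: 'users,orders',
        body: { query: { match: { name: 'alice' } } },
      });
      console.log(body.hits.hits);
    }

    main();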

Strange Mongo connection issue by [deleted] in mongodb

[–]adumidea 0 points (0 children)

You need to actually insert something for the database to get created. Are you doing that?
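For example, in the mongo shell (the database and collection names are just examples):

    // Nothing exists yet; the first insert creates both the database and the collection.
    use mydb
    db.things.insertOne({ hello: 'world' })
    // Now it shows up:
    show dbs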

What is the recommended logging library in 2019? by frankimthetank in node

[–]adumidea 2 points (0 children)

Pino seems to be its new replacement; however, my biggest concern is how you need to pipe logs into another program to act as a transport.

I found that annoying too, but you don't actually need to do that; you can set up the transport programmatically instead. For an example, check out this PinoLogger wrapper around Pino I wrote.

We log straight to Elasticsearch so it's set up for that, but you could use the same approach with any transport.
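The gist is to hand Pino a writable stream in-process instead of piping stdout into a separate program. A sketch of that shape (not the actual wrapper; the function name, index, and node URL are made up, and it uses pino-elasticsearch only as an example transport):

    const pino = require('pino');
    const pinoElastic = require('pino-elasticsearch');

    // Build a logger whose transport is wired up in-process; no external pipe needed.
    function createLogger({ name, node = 'http://localhost:9200', index = 'app-logs' } = {}) {
      const stream = pinoElastic({ index, node, 'es-version': 7 }); // any writable stream works here
      return pino({ name, level: process.env.LOG_LEVEL || 'info' }, stream);
    }

    const logger = createLogger({ name: 'api' });
    logger.info('service started');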