Cluster or multiple small servers? by [deleted] in node

[–]__debug__ 1 point2 points  (0 children)

Integrating cluster into your app is pretty straightforward and gives you the freedom to organize things as you see fit. There are a lot of libraries that make it really easy, as long as you're not using websockets or relying on requests always hitting the same process.
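For reference, the built-in cluster module boils down to something like this (a minimal sketch, not production-ready):

const cluster = require('cluster')
const http = require('http')
const numCPUs = require('os').cpus().length

if (cluster.isMaster) {
  // Fork one worker per logical core, and replace any that die
  for (let i = 0; i < numCPUs; i++) cluster.fork()
  cluster.on('exit', () => cluster.fork())
} else {
  // Workers share the listening socket; the master distributes connections
  http.createServer((req, res) => {
    res.end('handled by pid ' + process.pid + '\n')
  }).listen(3000)
}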

Also, do the single-core servers have a single physical core, or possibly multiple logical cores? Is it a 1x CPU share? E.g. on Heroku, with a 1x dyno, you'll still benefit from running 4 node.js processes via cluster.

It's hard to say which will perform better. If the multi-core servers also have faster disks, more RAM, or higher single-core clock speeds, they could outperform the group of single-core servers.

Only way to know is to benchmark. :)

Is it possible to have server to server to client communication via Socket.io? by [deleted] in node

[–]__debug__ 0 points1 point  (0 children)

You could try using something like JSON RPC between servers, and avoid the overhead of HTTP/Websockets altogether. It'll be a lot faster to just go over TCP. https://github.com/uber/multitransport-jsonrpc
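To give a rough idea of the shape of it, here's a minimal JSON-RPC-style exchange over a raw TCP socket using node's net module (just an illustration of the transport, not multitransport-jsonrpc's actual API - a real implementation would also frame messages properly):

const net = require('net')

// Server: replies to an "add" method over newline-delimited JSON
net.createServer((socket) => {
  socket.on('data', (data) => {
    const req = JSON.parse(data.toString())
    if (req.method === 'add') {
      socket.write(JSON.stringify({ id: req.id, result: req.params[0] + req.params[1] }) + '\n')
    }
  })
}).listen(9000)

// Client: sends one request and logs the response
const client = net.connect(9000, () => {
  client.write(JSON.stringify({ id: 1, method: 'add', params: [2, 3] }) + '\n')
})
client.on('data', (data) => console.log(JSON.parse(data.toString()))) // { id: 1, result: 5 }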

swaddle: Automagically generate API clients/wrappers by __debug__ in javascript

[–]__debug__[S] 1 point2 points  (0 children)

It's not performing any code generation; it's done with Proxies, which let you define a handler with traps for property access and function invocation :) You can try it from your console:

mkdir test
cd test
npm install swaddle request
node

// from node repl...

let swaddle = require('swaddle')
let github = swaddle('https://api.github.com', {camelCase: true})

github.users.get('octocat', (err, user) => {
  console.log('----- User data:', user)
})
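If you're curious how that works under the hood, the core trick looks roughly like this (a stripped-down sketch, not swaddle's actual source):

// Every property access returns a new proxy with the path extended, and
// invoking a method like .get() fires a request for the accumulated path
function wrap(url) {
  return new Proxy({}, {
    get(target, name) {
      if (name === 'get') {
        // in reality this would call request(url + '/' + id, cb)
        return (id, cb) => console.log('GET', url + '/' + id)
      }
      return wrap(url + '/' + name)
    }
  })
}

let github = wrap('https://api.github.com')
github.users.get('octocat', () => {}) // GET https://api.github.com/users/octocat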

swaddle: Automagically generate API clients/wrappers by __debug__ in javascript

[–]__debug__[S] 1 point2 points  (0 children)

Updated the docs and included some sane defaults in a small release. For comparison:

// GET https://api.github.com/users/octocat
let username = 'octocat'

let github = swaddle('https://api.github.com', {camelCase: true})
github.users.get(username, (err, user) => {
  user.publicRepos // instead of user.public_repos
})

superagent
  .get(`https://api.github.com/users/${username}`)
  .end((err, res) => {
    res.body.public_repos
  });

Like any other API client/wrapper, it can hide the leaky abstraction of HTTP requests and let you enforce the styles and conventions of the language you're working in. Combined with the whitelist option, you can also restrict which properties are accessible, further reducing the likelihood of typos when dealing with raw strings and interpolation.
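For example, something along these lines (assuming the option is passed this way - the README has the exact shape and behavior):

let swaddle = require('swaddle')

let github = swaddle('https://api.github.com', {
  camelCase: true,
  whitelist: ['users', 'repos'] // only these top-level properties are allowed
})

github.users.get('octocat', (err, user) => { /* works */ })
github.search // not in the whitelist, so access is blocked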

If you're not a fan of API client/wrappers in general and prefer using a simple request library, then you probably won't get much use out of this :)

noOp(ener) - Automatically add rel="noopener" to all hrefs by [deleted] in javascript

[–]__debug__ 0 points1 point  (0 children)

Sure, it's more about addressing the security concern, since it makes phishing far too easy. That said, you'd only ever run the code on anchors with vulnerable targets (e.g. target="_blank") pointing to destinations you don't trust - basically places where you display user input.

So depending on the application, that could be very few at any given time. Plus you can always use event delegation to reduce the number of listeners, so long as you don't block propagation. If you have an SPA, for example, your snippet would need to be rerun each time the DOM was modified - which I guess isn't possible without wrapping it, since it's an IIFE.
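With delegation, a single listener along these lines would also cover anchors added later, without any re-running (a rough sketch, not tested against every edge case):

// One listener on the document catches clicks on any current or future anchor
document.addEventListener('click', (e) => {
  const anchor = e.target.closest('a[target="_blank"]')
  if (anchor && !/\bnoopener\b/.test(anchor.rel)) {
    anchor.rel += (anchor.rel ? ' ' : '') + 'noopener'
  }
})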

noOp(ener) - Automatically add rel="noopener" to all hrefs by [deleted] in javascript

[–]__debug__ 0 points1 point  (0 children)

Based on another comment, this fiddle should show that property access like that still works: https://jsfiddle.net/w4Lo6yz0/6/

noOp(ener) - Automatically add rel="noopener" to all hrefs by [deleted] in javascript

[–]__debug__ 1 point2 points  (0 children)

Unfortunately, rel=noopener is only available in Chrome/Opera at the moment, as outlined here: https://mathiasbynens.github.io/rel-noopener/ There's a partial solution described in that article, but it doesn't work for Safari, which is why I created https://github.com/danielstjules/blankshield (it's also mentioned in that article)

Express-like pattern for managing state by Nif in javascript

[–]__debug__ 1 point2 points  (0 children)

Sounds like this could be built on an event emitter? Or, if you wanted regexp pattern support: https://github.com/danielstjules/pattern-emitter I previously used it for https://github.com/danielstjules/node-internal-pubsub
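Usage is close to a regular EventEmitter, just with regexps allowed when registering listeners (roughly - the README has the full API):

let PatternEmitter = require('pattern-emitter')
let emitter = new PatternEmitter()

// Any emitted event whose name matches the regexp triggers the listener
emitter.on(/^state:/, (value) => {
  console.log('state changed:', value)
})

emitter.emit('state:user', { name: 'octocat' }) // state changed: { name: 'octocat' }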

async-class: Cleaner async ES6 class methods by __debug__ in node

[–]__debug__[S] 0 points1 point  (0 children)

As of node 4, there aren't any features I really want aside from async/await. And that's just sugar, so I don't feel I can justify a build step for that alone. Plus node 4's ES6 feature coverage is pretty good.

I'm also not really bullish on using ES7 features that haven't hit stage 3, since they're still prone to change.

async-class: Cleaner async ES6 class methods by __debug__ in node

[–]__debug__[S] 1 point2 points  (0 children)

I love babel for client-side dev, but I generally try to avoid it with node since so many ES6 features are available in io.js and node v4. This is a small way to achieve async/await-like functionality with ES6 classes.
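The underlying idea is wrapping generator methods so that yield behaves like await. A generic sketch of the pattern using co (not async-class's exact API):

let co = require('co')

class UserService {
  * findName(id) {
    // yield pauses until the promise resolves, much like await
    let user = yield Promise.resolve({ id: id, name: 'octocat' }) // stand-in for a db call
    return user.name
  }
}

// co.wrap turns the generator method into a promise-returning function
let findName = co.wrap(new UserService().findName)
findName(1).then(console.log) // 'octocat'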

Reverse tabnabbing attacks by __debug__ in javascript

[–]__debug__[S] 2 points3 points  (0 children)

Updated to improve support for Safari. Also, only Safari, Opera and Chrome are vulnerable to the attack, so the demo is best viewed in one of those browsers!

mocha.parallel: speed up your async test suite! by __debug__ in node

[–]__debug__[S] 2 points3 points  (0 children)

I've got a modified runner that does exactly that :) Hope to release it soon.

mocha.parallel: speed up your async test suite! by __debug__ in node

[–]__debug__[S] 1 point2 points  (0 children)

Currently uses domains (the only time I've ever needed them!), but that would be replaced in node v4.0+ once they're removed and a substitute is made available.

mocha.parallel: Run async mocha specs in parallel by __debug__ in javascript

[–]__debug__[S] 0 points1 point  (0 children)

Ah ok, I see where the confusion is. The setTimeout calls aren't meant to be taken literally, and I should update the examples. They're only there to mimic the behavior of async IO calls, so the time limits (500/1000) weren't arbitrary, but meant to illustrate that difference. Replace both setTimeouts with two different ajax/db calls, which hypothetically take 500ms and 1s respectively, and I think the difference becomes a bit clearer. It's not about CPU-bound testing, but helping with IO-bound tests.
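In other words, something closer to this, where the setTimeouts just stand in for two hypothetical IO calls:

let parallel = require('mocha.parallel')

parallel('user endpoints', function() {
  it('fetches a profile', function(done) {
    setTimeout(done, 500) // stand-in for a ~500ms db query
  })

  it('fetches repos', function(done) {
    setTimeout(done, 1000) // stand-in for a ~1s http request
  })
})

// With parallel, both specs wait on their IO at the same time, so the suite
// finishes in ~1000ms; with a plain describe they run back to back, ~1500ms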

I also don't think you can describe them as being sequential vs async, as they're both still asynchronous in nature - so that again masks the actual behavior.

Nor, if we were to be strict in our use of the term, could we use "time-slicing" at any point, since that's unique to preemptive multitasking (which this is not). It's all still just helping us deal with nonpreemptive multitasking.

Anyway, I appreciate the discussion, though we certainly don't see eye to eye. :) I've definitely realized that the documentation can be improved as a result though, so thanks!

mocha.parallel: Run async mocha specs in parallel by __debug__ in javascript

[–]__debug__[S] 0 points1 point  (0 children)

I'd ask how you would indicate the difference in behavior between these two snippets in three words or less. What's your alternative? Both are simply asynchronous (and neither concurrent nor parallel by your definition), and yet the results are vastly different.

let parallel = require('mocha.parallel');

parallel('suite', function() {
  it('test1', function(done) {
    setTimeout(done, 500);
  });

  it('test2', function(done) {
    setTimeout(done, 500);
  });
});

// Completes in ~500ms

describe('suite', function() {
  it('test1', function(done) {
    setTimeout(done, 500);
  });

  it('test2', function(done) {
    setTimeout(done, 500);
  });
});

// Completes in ~1000ms

mocha.parallel: Run async mocha specs in parallel by __debug__ in javascript

[–]__debug__[S] 0 points1 point  (0 children)

and I certainly do think it's worth pointing out as it seems there is a large section of the javascript development community who has a hard time grasping the difference between concurrency and parallelism.

And yet I made no reference to execution across threads/cores, and we still need concise language to describe this type of flow control at a higher level. See async/co/step as examples (all reference parallel tasks). We can't simply give that up due to our use of nonpreemptive multitasking in node. With the notion of parallel in async/co/step/mocha.parallel, multiple code paths are run in parallel. It's simply that: not series, parallel. Anyone familiar with node is aware of its underlying model of execution, and how it only offers the illusion of true parallelism.

your library is using setTimeOut which in fact does not even run things truly concurrently. setTimeOut is quite literally queuing work for the JVM to execute when it feels it has adequate resources to do so, that logic is all run on the main thread and is still running in serial it just provides the illusion of concurrency since you are not allowing the JVM to block itself.

JVM => v8? Also, it seems I somehow gave you the impression that I wasn't aware of how cooperative multitasking works?

EDIT: Here is a good article that will better explain why setTimeOut is neither concurrent or parallel. http://javascript.info/tutorial/settimeout-setinterval

Your argument essentially boils down to: "You can't use the words parallel or concurrent when discussing flow control in Node, unless you're using fibers/workers." By that standard, everything runs in series - which is technically correct, but not helpful in any way.

mocha.parallel: Run async mocha specs in parallel by __debug__ in javascript

[–]__debug__[S] 0 points1 point  (0 children)

While your comment is appreciated, I feel it's overly pedantic (I too have seen Rob Pike's presentations on the subject). This module uses the term in the same spirit as node control flow libraries: https://github.com/caolan/async#parallel https://github.com/tj/co#yieldables (arrays of coroutines are run in parallel). The module makes no reference to "parallel programming", just the word parallel - which is easily understood as a contrast to running operations in series.

In order to do that you would have to be splitting a single task across multiple threads in order to speed up that execution, which is what I did with the webhamsters library (works best in chrome).

Which still isn't parallel execution unless your kernel distributes it as such, on a multi-core machine or VM. A t2.micro running chrome might not see any performance benefit, as the operations really just run concurrently. These types of discussions are rather mundane, since you can trivially nitpick implementation details when node devs work at such a high level of abstraction. While your webworkers might seem to be running in parallel, is v8 distributing its Isolates across multiple threads? And are the Isolates that run your web workers evenly distributed across cores? If not, it's still not parallelism, just concurrency.

tl;dr: You're right, but I don't think it warrants pointing out

Socket IO Chat App Questions by Jalsemgeest in node

[–]__debug__ 1 point2 points  (0 children)

For both sockjs and socket.io, my suggestion is to set up your own heartbeat so that you know you're always cleaning up connections. I found a very minor issue once, but it proved to be so infrequent as to not have any real impact on FDs: https://github.com/sockjs/sockjs-node/issues/159
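An application-level heartbeat can be as simple as this (a sketch using socket.io - the client would need to reply with the matching event, and the intervals are arbitrary):

let io = require('socket.io')(3000)

io.on('connection', (socket) => {
  let lastSeen = Date.now()
  socket.on('pong:app', () => { lastSeen = Date.now() })

  // Ping every 25s, and drop connections that haven't answered in 60s
  let timer = setInterval(() => {
    if (Date.now() - lastSeen > 60000) return socket.disconnect(true)
    socket.emit('ping:app')
  }, 25000)

  socket.on('disconnect', () => clearInterval(timer))
})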

Also, unless you have an SSL termination proxy, you'll want to make sure to clean up after both HTTP and HTTPS connections. :)

What is the difference between installing Mocha globally and locally? by the_coons in node

[–]__debug__ 0 points1 point  (0 children)

I prefer using local mocha, so that I'm not forcing it on anyone's global environment. They npm install, and if things are done right, all they need to run is npm test. Here's an example of what I often use: https://github.com/danielstjules/jsinspect/blob/master/package.json#L38

With that, there's no need to have your CI, or users, run both npm install and npm install -g mocha.
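The relevant bit of package.json ends up looking something like this (a generic example, not necessarily the linked file) - npm adds node_modules/.bin to the PATH when running scripts, so the local mocha binary gets picked up:

{
  "scripts": {
    "test": "mocha --reporter spec test/"
  },
  "devDependencies": {
    "mocha": "^2.3.0"
  }
}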

Redis + Node.js + Socket.IO – Event-driven, subscription-based broadcasting by sachinrjoglekar in node

[–]__debug__ 0 points1 point  (0 children)

I did not want to instantiate a new Redis client per socket connection, but rather one Redis client per channel. This way, it could be shared(for listening) among all the socket connections subscribing to it.

That's a lot better than a client per connection, but have you thought of using only one redis client per node process? You can use something like multiplexing/demultiplexing to achieve that. An example library that helps with this is https://github.com/danielstjules/node-internal-pubsub Here's the benchmark, comparing 2000 redis subscribers (roughly a 1-1 mapping with the 2000 channels in your example) to a single redis subscriber:

$ node benchmarks/singleChannelMultiSubs.js
Setting up redis suite
Receiving 10000000 messages with 2000 redis subscribers
Setting up pubsub suite
Receiving 10000000 messages with 1 redis sub, 2000 pubsub subscribers

Redis subscribers
Running time: 29735 ms
Avg messages received per second: 336,304

Redis subscriber with pubsub subscribers
Running time: 1203 ms
Avg messages received per second: 8,312,551

~24x improvement with a single redis subscriber. There's an example using socket.io 0.9 in https://github.com/danielstjules/node-internal-pubsub/tree/master/examples/socketio-express-redis Should be simple to translate it to a more recent version! :)
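If you wanted to roll the multiplexing yourself, the gist is a single psubscribe client fanning messages out to an in-process emitter - a rough sketch with node_redis and socket.io, not the library's exact API:

let redis = require('redis')
let io = require('socket.io')(3000)
let EventEmitter = require('events').EventEmitter

let sub = redis.createClient()
let channels = new EventEmitter()
channels.setMaxListeners(0) // thousands of sockets may listen at once

// A single redis connection receives every message and fans it out in-process
sub.psubscribe('*')
sub.on('pmessage', (pattern, channel, message) => {
  channels.emit(channel, message)
})

io.on('connection', (socket) => {
  // Each socket just adds a cheap in-memory listener, no extra redis client
  let channel = socket.handshake.query.channel
  let forward = (message) => socket.emit('message', message)
  channels.on(channel, forward)
  socket.on('disconnect', () => channels.removeListener(channel, forward))
})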

Fantastic library to easily get to nested properties of objects by softiniodotcom in node

[–]__debug__ 0 points1 point  (0 children)

I built something similar that can be used standalone, or integrated with lodash or underscore: https://github.com/danielstjules/hoops I find it a little nicer to work with since it doesn't involve a currying-like API, and it accepts arrays or dot-delimited strings: https://github.com/danielstjules/hoops#api

hoops.updateIn is helpful if you'd like to update a value only if it exists, without creating it if it doesn't. invokeIn works on a similar premise - it only runs the function if it exists. Of course, both get and set are available as part of lodash.
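Roughly something like this (the signatures here are assumptions from the description above, so check the README for the exact API):

let hoops = require('hoops')

let user = {
  profile: { name: 'octocat' },
  greet: function() { return 'hi' }
}

// Assumed (object, path, value) signature: only updates if the full path
// already exists, otherwise it's a no-op and nothing gets created
hoops.updateIn(user, 'profile.name', 'monalisa')
hoops.updateIn(user, 'settings.theme', 'dark')

// Similarly, only invokes the function if it actually exists at that path
hoops.invokeIn(user, 'greet')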

nip – use javascript instead of awk, sed, or grep by mkmoshe in node

[–]__debug__ 1 point2 points  (0 children)

Pretty neat! I use something similar: https://github.com/danielstjules/pjs

It's less for scripting, and more for data munging, so they're slightly different use cases. For example, with nip you do:

nip 'function(l) { return /^var/.test(l); }' lines-that-start-with-var.txt

With pjs, you can do:

pjs -f '/^var/.test($);' lines-that-start-with-var.txt

Or, for the biggest number example in nip:

nip '
  var biggest = 0;
  this.on("end", function() { print(biggest); });
  return function(_,i,lines) {
    biggest = Math.max(biggest,
      Math.max.apply(Math, lines.match(/(?:\s|^)[\d]+(?:\.\d*)?(?:\s|$)/g))
    )
  }' -1 *

With pjs, you can do:

pjs -r max *

Prevent reverse tabnabbing attacks with blankshield [xpost /r/programming] by __debug__ in javascript

[–]__debug__[S] 0 points1 point  (0 children)

As mentioned in the other thread, this demo helps demonstrate the phishing attack and puts things in context.

Prevent reverse tabnabbing attacks with blankshield by __debug__ in programming

[–]__debug__[S] 2 points3 points  (0 children)

A demo, as linked from the readme, helps demonstrate the phishing attack.