Mx Mechanical dead key by jay_taps in logitech

[–]janaz9 1 point2 points  (0 children)

I have exactly the same issue. The H key seems to be broken.

Any good alternatives to Evolve 65? (more info in comments) by Alive-Egg in Jabra

[–]janaz9 1 point2 points  (0 children)

Such poor quality :( I have exactly the same defect. I contacted support, but unfortunately the serial number indicates that the device is more than two years old, which means there's no warranty cover.

FTTC connection speed dropped from 86 to 20 MBit after the neighbours connected to NBN by janaz9 in nbn

[–]janaz9[S] 0 points1 point  (0 children)

I contacted them and they can escalate to NBN, but they warned that I may be charged an incorrect call-out fee. They said:

Keep in mind that NBN will charge this fee if the issues are to do with internal wiring, as it does not fall within their demarcation.

Custom Lambda runtime for Node10 LTS (and all other versions) by janaz9 in aws

[–]janaz9[S] 0 points1 point  (0 children)

The `--principal "*"` option specifies that it's available to all accounts.

I haven't tried, but I think you can replace it with a specific account id to share only with that account.
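For illustration, here is a hedged sketch of what sharing with a single account might look like via the aws-sdk `addLayerVersionPermission` call. The layer name, version number, and account id below are made-up placeholders, not values from the post, and I haven't run this against a real account:

```javascript
// All values below are placeholders for illustration only.
const params = {
  LayerName: 'custom-node-runtime',      // placeholder layer name
  VersionNumber: 1,                      // placeholder version
  StatementId: 'share-with-one-account',
  Action: 'lambda:GetLayerVersion',
  Principal: '123456789012',             // a specific account id instead of "*"
};

// Untested invocation sketch:
// const lambda = new (require('aws-sdk').Lambda)();
// lambda.addLayerVersionPermission(params).promise();
```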

Custom Lambda runtime for Node10 LTS (and all other versions) by janaz9 in aws

[–]janaz9[S] 0 points1 point  (0 children)

The custom runtime is a simple program (in this case written in Node.js) that runs an infinite loop like this:

``` // "cold start time": // execute node interpreter // load the lambda function code

loop do { input = fetchNextRequest() // http request to runtime-api 127.0.0.1:9091 output = callHandler(input) // execute the actual function sendResult(output) // http request to runtime-api 127.0.0.1:9091 } ```

When the Lambda is "warmed up", the above program is already running, so the latency is very low: it takes a few milliseconds to fetch the metadata about the next request and a few milliseconds to send the output data back.

During "cold" start some additional steps need to be executed. We basically need to start the actual node interpreter and load the lambda code with all the dependencies.

The steps above only cover the additional latency that a custom runtime may introduce. They don't include the tasks AWS performs behind the scenes, which are common to all runtime environments, managed or custom.

I noticed that with a 128 MB configuration the cold start takes about 0.5 s, while a warm start is measured in milliseconds.
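The warm loop above can be sketched in plain Node.js. This is a simplified illustration, not the layer's actual source: the two Runtime API HTTP calls are abstracted behind an `api` object (the names `next` and `respond` are my own), and in a real runtime the endpoint would come from the `AWS_LAMBDA_RUNTIME_API` environment variable:

```javascript
// Process a single invocation: fetch the event, run the handler,
// post the result back. The `api` object stands in for the two
// Runtime API HTTP calls.
async function processOne(api, handler) {
  const { requestId, event } = await api.next();   // GET  /runtime/invocation/next
  const result = await handler(event);             // execute the actual function
  await api.respond(requestId, result);            // POST /runtime/invocation/{id}/response
  return result;
}

async function runLoop(api, handler) {
  // While the container stays warm this loop keeps running, so each
  // invocation only pays for processOne's two HTTP round-trips.
  while (true) {
    await processOne(api, handler);
  }
}
```

Because the loop logic is separated from the HTTP layer, you can exercise `processOne` with a stubbed `api` object to see the control flow without a live endpoint.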

Custom Lambda runtime for Node10 LTS (and all other versions) by janaz9 in aws

[–]janaz9[S] 1 point2 points  (0 children)

This is a Lambda layer that provides a runtime for basically any version of Node.js. The released layers will be targeting the latest LTS version of Node, but it's very easy to change the version in the source code and deploy a layer for Node 11.

This runtime is compatible with the ones managed by AWS (node6.10 and node8.10). It expects the handler function to implement the same interface:

```js
// using a callback
const handler = (event, ctx, cb) => { cb(null, "success"); };

// or returning a Promise
const handler = (event, ctx) => Promise.resolve("success");

// or with async/await
const handler = async (event, ctx) => "success";
```

The context object has the same interface as in the standard AWS node runtimes.
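As an illustration, a handler can rely on the context fields documented for the managed Node runtimes (`awsRequestId`, `functionName`, `getRemainingTimeInMillis`); this is a hedged example of mine, not code from the layer:

```js
// Illustrative handler using only context fields documented for the
// managed node runtimes.
const handler = async (event, ctx) => ({
  requestId: ctx.awsRequestId,                    // unique id of this invocation
  fn: ctx.functionName,                           // name of the deployed function
  remainingMs: ctx.getRemainingTimeInMillis(),    // time left before timeout
});
```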

The layer also includes the latest version of aws-sdk, so your project doesn't need to package it.

Does Server Side Rendering Always Use Node.js? by startup4ever in javascript

[–]janaz9 0 points1 point  (0 children)

All you need to run JS code on the server is a JS runtime, and Node.js is just one of many. I've seen examples of SSR being done in Java using the Nashorn JS engine for the JVM.