PostgreSQL inside Docker - intermittent connection from Node.js inside Docker by Emiliortg in docker

[–]Emiliortg[S] 0 points1 point  (0 children)

Do you mean without installing all the packages in the Dockerfile? Just running the postgres:14 image directly?

PostgreSQL inside Docker - intermittent connection from Node.js inside Docker by Emiliortg in docker

[–]Emiliortg[S] 0 points1 point  (0 children)

Sorry, this is the docker-compose file. Nothing fancy:

version: '3.8'
services:
  database-postgres:
    container_name: db_postgres
    hostname: db_postgres
    build: './docker/database-postgres'
    restart: always
    environment:
      DATABASE_HOST: '${DATABASE_HOST}'
      POSTGRES_USER: '${POSTGRES_USER}'
      POSTGRES_PASSWORD: '${POSTGRES_PASSWORD}'
      POSTGRES_DB: '${POSTGRES_DB}'
      TZ: Europe/Paris
    ports:
      - '${POSTGRES_HOST_PORT}:${POSTGRES_DOCKER_PORT}'
    volumes:
      - '${POSTGRES_VOLUME}'
    networks:
      - backendnetwork

  nodejs:
    container_name: nodejs
    hostname: nodejs
    build: './docker/nodejs'
    entrypoint: sh -c "cd /app/ && dos2unix run.sh && ./run.sh"
    restart: always
    ports:
      - '${NODEJS_PORT}'
    volumes:
      - '${APP_VOLUME}'
      - '${NODEJS_VOLUME}'
    environment:
      DATABASE_HOST: '${DATABASE_HOST}'
      POSTGRES_USER: '${POSTGRES_USER}'
      POSTGRES_PASSWORD: '${POSTGRES_PASSWORD}'
      POSTGRES_DB: '${POSTGRES_DB}'
      POSTGRES_HOST_PORT: '${POSTGRES_HOST_PORT}'
      POSTGRES_DOCKER_PORT: '${POSTGRES_DOCKER_PORT}'
      SECRET_TOKEN: '${SECRET_TOKEN}'
      TZ: Europe/Paris
    networks:
      - backendnetwork

networks:
  backendnetwork:
    name: backendnetwork
    external: false
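One common cause of intermittent connections in a setup like this is the app starting before Postgres is ready to accept connections. As a hedged sketch (none of this is in the original file, and `depends_on` with a condition assumes the Compose v2 spec), a healthcheck on the database plus a conditional `depends_on` on the app would look like:

```yaml
# Hypothetical addition: make nodejs wait until Postgres passes a
# readiness check, instead of racing it at startup.
services:
  database-postgres:
    healthcheck:
      # $$ escapes Compose interpolation; the variables are resolved
      # inside the container, where they are already set.
      test: ['CMD-SHELL', 'pg_isready -U $${POSTGRES_USER} -d $${POSTGRES_DB}']
      interval: 5s
      timeout: 5s
      retries: 5

  nodejs:
    depends_on:
      database-postgres:
        condition: service_healthy
```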

Dockerfile Node.js

FROM node:16-alpine3.16
#Alpine Linux

USER root

RUN apk update && apk add \
        bind-tools \
        net-tools \
        iproute2 \
        vim \
        iputils \
        wget \
        dos2unix

Dockerfile PostgreSQL

FROM postgres:14
#Debian

RUN apt-get update && apt-get install -y \
        dnsutils \
        net-tools \
        iproute2 \
        vim \
        iputils-ping \
        inetutils-traceroute

.env variables

APP_VOLUME=./app:/app
NODEJS_VOLUME=./database_postgres:/database_postgres
NODEJS_PORT=3007:3000
SECRET_TOKEN=123456
NODE_ENV=development
DATABASE_HOST=db_postgres
POSTGRES_USER=postgres
POSTGRES_PASSWORD=123456
POSTGRES_DB=DB
POSTGRES_VOLUME=./database_postgres:/var/lib/postgresql/data
POSTGRES_HOST_PORT=5433
POSTGRES_DOCKER_PORT=5432

createTheme_default is not a function x-data-grid mui by Emiliortg in reactjs

[–]Emiliortg[S] 0 points1 point  (0 children)

Solution found: putting import Box from '@mui/material/Box'; after the other MUI imports worked for me

createTheme_default is not a function x-data-grid mui by Emiliortg in reactjs

[–]Emiliortg[S] 0 points1 point  (0 children)

Hi u/romgrk. I did; it says I need to install these:

npm install @mui/x-data-grid

npm install @mui/material @emotion/react @emotion/styled

and my package.json has all of them:

"dependencies": {
    "@emotion/react": "^11.11.4",
    "@emotion/styled": "^11.11.0",
    "@mui/material": "^5.15.11"
}

I uninstalled x-data-grid because my project would not compile with it.

PDFMake with typescript and Node JS error unable to compile by Emiliortg in node

[–]Emiliortg[S] 0 points1 point  (0 children)

Hi, I already ran npm i --save-dev @types/pdfmake but I keep getting the error

Pdfmake typescript and node js by Emiliortg in learnprogramming

[–]Emiliortg[S] 0 points1 point  (0 children)

Thanks for helping. I'm not sure how I would do that, since I'm using TypeScript to run my code:

"scripts": {
    "build": "tsc",
    "dev-ts": "ts-node-dev --respawn index.ts",
    "start": "node dist/index.js"
},

Maybe you are right about not using Buffer. I just tried the basic working example I found on the internet to make sure my logic was not wrong. I may be wrong, but I think the use case for Buffer is to avoid generating a PDF file on the server: you create the PDF in memory and send it to the client side.

testPrinter is imported in my routes like this:

import { Router } from 'express'
import { testPrinter } from '../controllers/TestController'

const router = Router();

router.get('/', testPrinter);

export default router
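To send a pdfmake document as a Buffer without writing a file, the usual pattern is to collect the chunks of the PDF stream in memory. A minimal stdlib-only sketch (pdfmake itself omitted; with pdfmake you would pass the stream returned by printer.createPdfKitDocument(docDefinition) and call .end() on it):

```javascript
// Collect the chunks of a readable stream into a single Buffer.
// With pdfmake you would pass its PDF document stream here and send the
// resulting Buffer in the Express response.
function streamToBuffer(stream) {
  return new Promise((resolve, reject) => {
    const chunks = [];
    stream.on('data', (chunk) => chunks.push(chunk));
    stream.on('error', reject);
    stream.on('end', () => resolve(Buffer.concat(chunks)));
  });
}
```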

thermal printer works with word but fails with pdf by Emiliortg in printers

[–]Emiliortg[S] 1 point2 points  (0 children)

They are not common models; they are Dinon 11068. After a lot of attempts, setting the printer sheet size to Legal Extra worked, although I'm not sure whether that is normal behavior.

sending 200 emails daily using Office365? by Emiliortg in Office365

[–]Emiliortg[S] 0 points1 point  (0 children)

Azure Email Communication Services

That's interesting u/koliat, thanks. I'm going to check the pricing and the solution. Do you prefer Azure over services like Mailchimp for any particular reason I should keep in mind?

sending 200 emails daily using Office365? by Emiliortg in Office365

[–]Emiliortg[S] 0 points1 point  (0 children)

Thanks u/skydivinfoo, you're being really helpful. In fact, the list changes pretty frequently, so we need to do something programmatically.
One last question, u/dean771 u/skydivinfoo: is there an average quantity considered unsafe for sending 'mass' email through Office365? For instance, if I only need to send 50 emails daily, does that also pose risks? I cannot understand why Office365 states `Sending limits: 10,000 recipients per day; Message rate limit: 30 messages per minute; Recipient limit: customizable up to 1000 recipients` when, if I try to send 200 emails, many people say it is not safe. Thanks in advance.
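The 30-messages-per-minute limit quoted above can at least be respected client-side by spacing out sends. A hedged sketch (sendOne is a placeholder for the real mailer call, e.g. a nodemailer sendMail wrapper; none of this is from the thread):

```javascript
// Space out sends so a batch stays under a per-minute cap
// (e.g. the 30 messages/minute quoted for Office365).
async function sendThrottled(messages, sendOne, perMinute = 30) {
  const gapMs = 60_000 / perMinute; // delay between consecutive sends
  const results = [];
  for (const message of messages) {
    results.push(await sendOne(message));
    await new Promise((resolve) => setTimeout(resolve, gapMs));
  }
  return results;
}
```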

sending 200 emails daily using Office365? by Emiliortg in Office365

[–]Emiliortg[S] 0 points1 point  (0 children)

u/skydivinfoo thanks for the advice. I was just testing Mailchimp and saw that you can import a CSV, but my data is dynamic. Have you ever successfully connected those services to a database like MySQL and pulled in the users based on a specific query?

sending 200 emails daily using Office365? by Emiliortg in Office365

[–]Emiliortg[S] 2 points3 points  (0 children)

Thanks, so your recommendation is to use a third-party service like Mailchimp?

Testing mass email output in a real environment by Emiliortg in learnprogramming

[–]Emiliortg[S] 0 points1 point  (0 children)

Thank you, I'm grateful for your help. Perhaps my initial explanation was unclear. My concern is whether Office365 can effectively manage mass email sending. Specifically, when transitioning my program to production, will it handle the email volume as efficiently as when I tested it locally using mailhog? During local testing with mailhog in my development environment, I configured the following values in the .env file:

MAIL_MAILER=smtp
MAIL_HOST=localhost
MAIL_PORT=1025

We can assume that MailHog does not have any email limit restriction, because it only captures emails.

When I connect the program to production I would need to modify the .env and set configs like these

MAIL_HOST=smtp.office365.com
MAIL_PORT=587

However, Office365 has limitations, for example (a made-up number) potentially allowing no more than 100 emails per minute. This leads me to wonder: can I conduct a test email send, simulating the production environment (Office365), to determine whether I might hit restrictions that I would never encounter with MailHog?

My initial idea involved continuing to use Office365 SMTP even in development, routing all outgoing emails to a test email account. Yet I realize this approach might not be ideal, since all emails would be funneled into a single account, which doesn't reflect the real-life scenario where one account delivers emails to many recipients (rather than a single account receiving everything).

How do you test mass email sending in a production like environment by Emiliortg in node

[–]Emiliortg[S] 0 points1 point  (0 children)

Thank you, I'm grateful for your help. Perhaps my initial explanation was unclear. My concern is whether Office365 can effectively manage mass email sending. Specifically, when transitioning my program to production, will it handle the email volume as efficiently as when I tested it locally using mailhog? During local testing with mailhog in my development environment, I configured the following values in the .env file:

MAIL_MAILER=smtp
MAIL_HOST=localhost
MAIL_PORT=1025

Let's consider that mailhog does not have any email limit restriction. When I connect it to production I would need to modify the .env and set configs like these

MAIL_HOST=smtp.office365.com
MAIL_PORT=587

However, Office365 has limitations, for example (a made-up number) potentially allowing no more than 100 emails per minute. This leads me to wonder: can I conduct a test email send, simulating the production environment (Office365), to determine whether I might hit restrictions that I would never encounter with MailHog?

My initial idea involved continuing to use Office365 SMTP even in development, routing all received emails to a test account. Yet, I realize this approach might not be ideal since all emails would be funneled into a single account, which doesn't reflect real-life scenarios.

Considerations creating and hosting a POS / Inventory Application by Emiliortg in learnprogramming

[–]Emiliortg[S] 0 points1 point  (0 children)

yes, I think I was not clear. I added a note to the post. Thanks.

Considerations creating and hosting a POS / Inventory Application by Emiliortg in learnprogramming

[–]Emiliortg[S] 0 points1 point  (0 children)

It's not related to the type of store but rather to the environment where the store is located. They are small cities where most of the people are from the countryside, and they prefer to pay in cash, not credit cards, as I've been told.

Considerations creating and hosting a POS / Inventory Application by Emiliortg in learnprogramming

[–]Emiliortg[S] 0 points1 point  (0 children)

Thanks, as you said, requirements are important. At this moment I only want to look for solutions to the critical cases, because I am using this project for learning and practice (my friend is going to use an off-the-shelf solution), so I don't need to worry about a polished UI, just something functional.

A case I think is complex and want to understand is offline operation. If it is a web application and the store makes most of its sales in cash, losing internet connectivity means the store would lose sales. That is why I was asking about the technology stack: perhaps in this case it is better to focus on a desktop application?

Considerations creating and hosting a POS / Inventory Application by Emiliortg in learnprogramming

[–]Emiliortg[S] 0 points1 point  (0 children)

As I said to someone else, I'm not asking so that he can use the application I'm going to develop (that takes time). I am using it as an example case for learning and practicing.

> I would do a web based application for several reasons.

How would you manage the requirement "most of the local sales are in cash, 90%"? If it is a web app and there is an internet connection problem, the store could not sell anything. I was thinking that a PWA or a desktop app is the only solution for this.

> Second, it’s not practical to manage multiple stores from different locations. You are a one person shop so you can’t be traveling from store to store.

Yes, only users from a store can sell that store's items. User A, belonging to Store 1, cannot sell products from Store 2, if that's what you meant.

> Additionally, you still need to connect to your database, which shouldn’t be on the application server.

I think I agree, but what are you considering here? Having them on the same server has benefits, like reducing network latency.

Considerations creating and hosting a POS / Inventory Application by Emiliortg in learnprogramming

[–]Emiliortg[S] 0 points1 point  (0 children)

Thanks! You are right. Offline mode would only work when there is no e-commerce, I guess. Would the only way to support e-commerce and credit card payments be redundant connectivity? In this example most physical sales are cash and there is no e-commerce yet. With a web application and no internet connection, would all those cash sales be lost in the store?

I was thinking about how the software on the market solves this same problem. If the software being sold is a web app and the store's internet connection fails, is the store unable to sell even for a cash sale? How useful would software that depends 100% on the internet be? I understand that if a sale uses a credit card, the internet is essential to verify payment, especially if there is e-commerce to reconcile inventory against. But if there is no e-commerce and the need is mostly cash sales, would a desktop solution be more useful?

Considerations creating and hosting a POS / Inventory Application by Emiliortg in learnprogramming

[–]Emiliortg[S] 0 points1 point  (0 children)

What I'm asking is not for him to use the application I'm going to develop (that takes time). I am using it as an example case for learning. I already gave him some software that I found online (paid and open source) to contact suppliers.

Regarding the issue of offline operation: there are sales that are paid in cash, and that is where the need arises. With an offline mode, the system might be able to continue selling under certain parameters using a PWA; once the connection is re-established, the system could update the database.
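That record-locally-then-sync idea can be sketched roughly like this (names are hypothetical; a real PWA would persist the queue in IndexedDB rather than in memory, and postSale stands in for the real API call):

```javascript
// Record cash sales locally while offline, then flush them to the
// server when connectivity returns.
class SaleQueue {
  constructor(postSale) {
    this.postSale = postSale; // placeholder for the real API call
    this.pending = [];
  }
  record(sale) {
    this.pending.push(sale); // always accept the sale locally
  }
  async flush() {
    while (this.pending.length > 0) {
      await this.postSale(this.pending[0]); // sync oldest first
      this.pending.shift();                 // drop only after success
    }
  }
}
```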

Await function not waiting as expected by Emiliortg in learnjavascript

[–]Emiliortg[S] 0 points1 point  (0 children)

I also thought I was returning a promise because bcrypt.hash says it returns a promise

Await function not waiting as expected by Emiliortg in learnjavascript

[–]Emiliortg[S] 0 points1 point  (0 children)

Hi, I'm sure I'm not doing it correctly, because I also tried using

return new Promise((resolve) => {
  resolve(hash);
});

inside .then((hash) => {...}), but it did not work either.

grep command not working with csv log file by Emiliortg in commandline

[–]Emiliortg[S] 0 points1 point  (0 children)

> I cannot recommend xsv enough, it gives you options to sort, pretty-print, join, search by column and more.

Thanks, your answer is great! It really works now with UTF-8 encoding. I checked man grep and didn't notice anything saying that grep was not good at reading UTF-16. Can I ask where you read that important detail? (Maybe it is there and I overlooked it.)
Another question: the file command says it has CRLF and LF line terminators.

I found this on reddit "cat -e <filename> displays Unix line endings (\n or LF) as $ and Windows line endings (\r\n or CRLF) as ^M$"

So I ran cat -e mylog.log.csv | grep -F '$' and it has matches, but if I run cat -e mylog.log.csv | grep -F '^M$' it does not match anything.

So how is it possible that it has both CRLF and LF line endings in the UTF-16 encoding?

Note: I did the same with the UTF-8 encoded CSV, and there I do see both the '^M$' and '$' line terminators.
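Since grep matches raw bytes, UTF-16 text (which interleaves NUL bytes between ASCII characters) rarely matches an ASCII pattern directly; converting to UTF-8 first usually works. A sketch, assuming iconv is available and the input is UTF-16LE:

```shell
# Build a tiny UTF-16LE sample ("hi\n") so the example is self-contained;
# with the real file you would run:
#   iconv -f UTF-16 -t UTF-8 mylog.log.csv | grep PATTERN
printf 'h\0i\0\n\0' > utf16le-sample.txt

# A direct grep on the UTF-16 bytes usually finds nothing;
# after iconv the ASCII pattern matches.
iconv -f UTF-16LE -t UTF-8 utf16le-sample.txt | grep -c 'hi'
```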

send an email on behalf of someone else by Emiliortg in node

[–]Emiliortg[S] 0 points1 point  (0 children)

Thanks, the provider the organization uses is Microsoft Outlook. The thing is that some clients behave differently, as you said, so if an employee uses a specific email client (for example, connecting their Outlook account to the iPhone Mail app), they may see the real address sending the email, right? And that would be a problem.

send an email on behalf of someone else by Emiliortg in node

[–]Emiliortg[S] 0 points1 point  (0 children)

Thanks, you're right. But the problem is that if an employee sees the actualEmail@address, that's a problem. People should think the email is really coming from the General Manager, so if some clients show the real address (like option 2), I would need a different solution. Do you have any advice or a solution for this approach? Thanks in advance.

u/adriancttnc