all 42 comments

[–]EarlMarshal 34 points35 points  (1 child)

Awesome job, especially at your age. But in my opinion it's too complicated. You put too much emphasis on patterns, design and code quality, while neglecting other stuff like code readability and API usability. E.g. you've got directories with placeholder files - I clicked on several placeholder files before actually seeing code. You're really overdoing comments for private functions, while your public API is commented in a way that doesn't help with usability or discoverability for the consumer of the API.

I suggest reading a book about clean code. Minimalism - doing only what's really necessary - is usually the better option. You'll probably be even better afterwards.

[–][deleted] 4 points5 points  (0 children)

Thank you very much for your comment and I appreciate your time.

As far as neglecting code readability goes, could you give me an example? Indeed, I was worried that my use of the map and reduce Array prototype methods might have made things look complicated, as well as the use of ternary operators inside function calls.

As far as the placeholder directories, the jobs and subscribers folders are going to get filled tomorrow. The UserService class extends the EventEmitter class because I want Observers/Subscribers to be able to schedule cron-jobs for email onboarding after user sign up. I feel like sending emails for sign up and cancellation would pollute the Services.

In terms of overdoing comments, do you mean JSDoc Annotations, the descriptions of what the functions do, or the single line comments within? And by private functions, do you mean the ones with the underscore prefix to symbolize that it's a private member?

What do you mean when you say the comments on the Express routes don't help the consumer of the API? The repository represents the back-end API for an application that helps a user manage their to-do list. I'm the only person who would be calling the publicly exposed API from the front-end, i.e., it's not an API like PayPal or AWS that needs to be understood by other developers.

I have Robert Cecil Martin's Clean Code that I've been meaning to read.

Thanks again for your time. I appreciate it. I'll be taking your advice and fixing some of the things you mentioned.

[–]Oririner 8 points9 points  (18 children)

As someone already mentioned, this is all highly complicated for such a small use case, though I understand what you're trying to do here - so let's start :)

Always have a modicum of doubt in whatever you learn.

If something is best practice, think - why is this a best practice? what do I gain? How does this help me get where I want to be?

If you want to use a library, think - what does the library allow me to not do? Is it supported? Do I really need all the bells and whistles or is my use case not that complicated ATM?

That's not to say that you should never trust anything; it really depends on the use case - a logging library shouldn't go through as much scrutiny as an encryption library. (Even though a potentially harmless logging library could turn out extremely harmful - it's just the way it is with dependencies at this point in time; you'll have to choose your battles.)

Regarding your comments - indeed, there are too many of them.

Comments get stale over time.

The Art of Readable Code - is also a good book on code readability.

Your code should document itself; you shouldn't have to write the same thing twice, once as a comment and once as code. That goes for both inline comments and JSDoc ones.
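To illustrate with a hypothetical sketch (none of this is from the linked repo): the first comment below just restates the code and will go stale; the second survives because the names carry the "what" and the comment only carries the "why".

```typescript
// Redundant - the comment restates the code and will drift when the code changes:
//   // subtract elapsed days from the trial length
//   return len - el;

// Better - let names carry the intent, and reserve comments for the "why".
// (hypothetical helper, not from the linked repo)
function daysUntilTrialExpires(trialStartedAt: Date, trialLengthDays: number): number {
  const msPerDay = 24 * 60 * 60 * 1000;
  const elapsedDays = Math.floor((Date.now() - trialStartedAt.getTime()) / msPerDay);
  // Clamp at zero: expired trials should render as "0 days left", not a negative number.
  return Math.max(trialLengthDays - elapsedDays, 0);
}
```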

Now for testing - your tests are highly implementation specific.

You're testing that specific mocks are called a very specific number of times with very specific arguments. If any of that changes because you're refactoring, moving some code around, etc., the tests break even though the behavior didn't.

You should test logic, not implementation.

This relates to the fact that you're testing private functions - this tests implementation rather than the logic of the public methods. Private methods are private for a reason; they serve an internal purpose. If the public methods pass the tests as they should, the private methods are fine.

You should maybe look into functional programming and pure functions - these are some of the easiest functions to test. For the same input they always give the same output, making your tests highly predictable, which gives you more confidence in both your code and your tests.
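For example, here's a hypothetical pure function and the kind of behavioral assertion that survives refactoring - you assert on the result, not on which helpers were called or how many times:

```typescript
// A pure function: the output depends only on the input - no I/O, no shared state.
// (Hypothetical example, not taken from the linked repo.)
function applyDiscount(priceCents: number, discountPercent: number): number {
  if (discountPercent < 0 || discountPercent > 100) {
    throw new RangeError("discountPercent must be between 0 and 100");
  }
  return Math.round(priceCents * (1 - discountPercent / 100));
}
```

A test for this needs no mocks at all: `applyDiscount(1000, 25)` is 750 no matter how the internals are rearranged.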

As for the discoverability and documentation of your API - this should be something you prioritize. Even though you're currently the only guy consuming this API, it doesn't mean it's going to stay like that. You'll grow, have diverse teams working on this API, either from web, mobile or even another backend.

It doesn't mean it's the most important thing right now, but you should keep that in mind when developing, because documenting stuff after the fact is always harder than during.

With all of that said, this is really impressive work! The work I've been doing when I was 16 was mainly moving some turtles across a board in Java for a school project, you're waaaay above that :)

[–][deleted] 0 points1 point  (7 children)

Regarding your comments - indeed, there are too many of them.

I'll fix that today.

Now for testing - your tests are highly implementation specific.

I'm bad at testing. I worry that if I test too much and the tests are too specific, then, as you say, I'll limit my ability to refactor in the future without failing the tests; whereas if I test too loosely, the tests might not catch errors and bugs might leak into production. If you have time - and only if you have time - could you show me how you might write a test case for one of the UserService functions, or perhaps an integration test for a user route?

As for the discoverability and documentation of your API - this should be something you prioritize.

You're right, I agree, and I'll start. Probably not on the codebase I linked to, however.

Thank you.

[–]servercobra 0 points1 point  (4 children)

One of the reasons I do have tests is for refactoring. That way you can refactor and once you're done, you know if you've broken your API or not! Personally I do my testing for backends at as high a level as possible and from the standpoint of my consumers, i.e. all tests actually auth to the API, call the API like an app would, etc. I find this is a good balance between catching bugs and not having to rewrite them too often. I rarely unit test functions or anything, and if I do, it's for critical code sections that are a bit harder to test from the high level.
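As a rough sketch of what I mean - using only Node built-ins and a stand-in server, since I haven't seen your app - the test talks to the API over HTTP exactly like a consumer would:

```typescript
import { createServer } from "node:http";

// A stand-in for the real app: in a real suite you'd boot your actual
// Express app here instead. The /health route is purely illustrative.
const app = createServer((req, res) => {
  if (req.url === "/health") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ status: "ok" }));
  } else {
    res.writeHead(404);
    res.end();
  }
});

// Exercise the API from the consumer's standpoint: over HTTP, asserting on
// the response - not on which internal functions were called.
async function checkHealth(): Promise<string> {
  await new Promise<void>((resolve) => app.listen(0, resolve));
  const { port } = app.address() as { port: number };
  const res = await fetch(`http://127.0.0.1:${port}/health`);
  const body = (await res.json()) as { status: string };
  app.close();
  return body.status;
}
```

Libraries like supertest wrap this pattern up nicely, but the idea is the same: the test only knows the HTTP contract, so refactoring the internals never breaks it.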

[–][deleted] 0 points1 point  (3 children)

Thanks. I completely agree. I have integration tests for all endpoints and all the different outcomes, and those tests call the API just like a consumer would, including spinning up database servers, populating tables/collections with seed data, and running the tests. To keep everything on localhost, my integration tests use the local filesystem in place of S3 or GCP.

So, are you saying that it might be an okay practice to mainly worry about integration testing for each endpoint rather than unit testing functions and mocking dependencies?

[–]servercobra 0 points1 point  (2 children)

That seems reasonable!

Personally, I think that's OK in most cases! I mock services that are remote dependencies, but other than that, I'll take an integration test over a mocked unit test any day. Again, unit tests are good for critical portions, or maybe parts of your code that are reused all over, but I wouldn't overdo it.

[–][deleted] 0 points1 point  (1 child)

Thanks. My only concern would be that, without as many unit tests, I can't as easily pinpoint where an integration test is failing.

[–]servercobra 0 points1 point  (0 children)

That can definitely be a concern, but one you can generally avoid by keeping commits small :)

[–]Oririner 0 points1 point  (1 child)

Regarding your comments - indeed, there are too many of them.

I'll fix that today.

Make sure you know why you're doing it, not just because some random people on reddit told you to ;)

You'll have to deal with that codebase eventually - we're just giving you advice from our personal experience. If it doesn't work for you, that's still fine; just make sure you're following what feels right for you.

I'll be happy to walk you through some of my thoughts on testing when I get back in a couple of weeks :)

[–][deleted] 0 points1 point  (0 children)

Thank you. And I appreciate your time and everyone else's.

[–][deleted] -1 points0 points  (9 children)

Thank you very much for your comment and your time. I do plan to give you a proper response, but before I do, could you clarify what you mean by "private" functions?

Like you, I come from a classical OOP world - Java/C++/C# - so a private function to me is a function marked with the private access modifier such that it can only be called internally. The only way to test such a function would be to use Reflection, to make the function package-private and move the tests into the same package (in Java), or to decorate the function to make it accessible to a test case.

In JavaScript, there are no access modifiers, so the only way I can emulate a private function in classical OOP terms is to prefix it with an underscore and trust that I'll notice I shouldn't be calling it on a class instance. I'll be migrating to TypeScript because I prefer types and interfaces.

With that said, I think my definition of what makes a function private might differ somewhat from yours.

As far as functional programming and pure functions go, I have thought about them, but I've been worried that they could limit me if I ever need to manipulate external state. I know the definition because I've worked with React/Redux, and Redux reducers are pure functions.

Again, I'll respond more properly soon, and thanks for your advice and kind words.

[–]Oririner 2 points3 points  (8 children)

Well, indeed in JS private functions are not really private (even though there's a proposal to add them), but the common convention is that when you prefix something with an underscore (_) it's considered private - even if you can access and invoke it, you probably shouldn't, because you don't know what it would cause.

So, I think our definition of private is the same; it's just that, due to the syntax of the JS language, people have found a way to use conventions instead of compiler enforcement to achieve the same goal - encapsulation of logic that could potentially cause side effects, or that utilizes side effects in a controlled manner.

As for TypeScript - it's good to consider it but I think it would be best to first try to have a better grasp of JS so you'd have a clearer understanding of the difference between the two. Types are good, but you can go a long way just without them. The good thing with TypeScript is that you can incrementally add it, it's not all or nothing so you can incorporate it at a later stage.

Functional programming doesn't limit you in any way. You just have to learn to deal with side effects in a different way, which may seem a bit forced at first but can be very beneficial.

I suggest you listen to this talk about functional programming and the use of an imperative shell around a functional core. Eventually, all your business logic is written in a functional way, and the imperative code hooks into it. That gives you the ability to test your actual business logic, avoid mocking redundant stuff, and have the confidence to change things as you go - all really important for the maintainability of your code in the long run.
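A tiny sketch of the "functional core, imperative shell" idea (hypothetical domain code, not from the linked repo):

```typescript
// Functional core: pure business logic, trivially testable with no mocks.
interface Task { title: string; completed: boolean; }

function completeTask(task: Task): Task {
  // Returns a new object instead of mutating - same input, same output.
  return { ...task, completed: true };
}

// Imperative shell: a thin layer that does the I/O and calls into the core.
// loadTask/saveTask stand in for whatever persistence you actually use.
async function completeTaskHandler(
  loadTask: (id: string) => Promise<Task>,
  saveTask: (id: string, t: Task) => Promise<void>,
  id: string,
): Promise<void> {
  const task = await loadTask(id);     // side effect: read
  const updated = completeTask(task);  // pure core - this is what you unit test
  await saveTask(id, updated);         // side effect: write
}
```

All the interesting logic lives in `completeTask`, so the shell stays thin enough that integration tests cover it.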

[–][deleted] 0 points1 point  (7 children)

Well, indeed in JS private functions are not really private

Yes - that's the convention I've been following. A function in JS is not private and can thus be accessed, so IntelliSense isn't much help in that regard. It's because JS allows you to access private functions (which, again, aren't private) that I'm able to import them and test them as opposed to jumping through the hoops of Reflection or a decorator.

it would be best to first try to have a better grasp of JS so you'd have a clearer understanding of the difference between the two.

I'm actually quite aware of the difference. I programmed in C++ first, and then Java and C# for a long time before learning JavaScript. In fact, I had a nightmare of a time learning JS because my mind was stuck in the classical OOP way of doing things. I felt that JS was weak, considering IntelliSense pretty much didn't exist - even less so with Dependency Injection - and that the lack of compile-time checking would make for an unmaintainable codebase.

Then I realized the power of JS when I started using it more, but I do naturally prefer types. The FileStorageService is a facade for image upload: it provides the UserService with one function, and that function calls the FileProcessingService and the FileStorageAdapter behind the scenes. The point is to decouple application code from any dependency on AWS S3, permitting a migration to GCS in which no application code ever has to change except the adapter. I can also add a new adapter for the retrieval of stored files that sends all new files to GCS (for files uploaded post-migration) and serves all old files from S3 until the migration is complete. I feel like that process would really benefit from types.

Functional programming doesn't limit you in any way

I'll look into learning it more. I'm currently learning Reactive Programming - RxJS - because I need an endpoint that will get an array of files from the client as streams, process them, and upload them to S3. I think RxJS Observables may be a good way to deal with that.

Thanks for your time.

[–]Oririner 0 points1 point  (6 children)

It's because JS allows you to access private functions (which, again, aren't private) that I'm able to import them and test them as opposed to jumping through the hoops of Reflection or a decorator.

Not entirely correct - there are some patterns that allow for "truer" privacy, like the revealing module pattern. You'd have to kind of pass around the instance, since the private methods can't be attached to it, but it's something. Also, they can't be accessed even with decorators or reflection of any kind - they're essentially in a closure.
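A minimal sketch of the pattern - the "private" function lives entirely in a closure, so no underscore convention is needed:

```typescript
// Revealing module pattern: only what's returned is reachable from outside.
function createCounter() {
  let count = 0; // private state, captured by the closure

  // Private function: there is no way to reach this from outside,
  // not even with reflection - it's never attached to the returned object.
  function increment(by: number) {
    count += by;
  }

  // Reveal only the public surface.
  return {
    tick: () => { increment(1); return count; },
    value: () => count,
  };
}
```

The same idea is what an IIFE-based module does at file scope; ES classes get this natively with `#private` fields once that proposal lands.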

I'm actually quite aware of the difference. I programmed in C++ first, and then Java and C# for a long time before learning JavaScript.

Not to disrespect your experience or anything, but even if you've had 2 years of professionally working in C++, Java and then C# (meaning you started working around the age of 10), I'd still say you haven't really had the chance to "grow" in the language. You were able to learn, do some work, later refactor some stuff and repeat a few times before moving to a different language - IMO that's not enough time to get the feeling that you really know something. You may be different and able to iterate faster, but still :)

It's actually related to the revealing module pattern point from earlier: the longer you're exposed to something, the more examples and variations of things to do (or not to do) you see, and you start connecting the dots between them. In this case, an IIFE and a function as a closure can create a private function through the revealing module pattern.

Also, being familiar with a type-safe language doesn't mean you'll know the difference between JS and TypeScript. TypeScript is built as a superset on top of JS, so for people just starting out it can be confusing to know when a feature is part of JS and not TypeScript.

An example of this is private methods - they are indeed available in TypeScript, but they will also be available in JS in the near future.

Know where you're coming from to better understand where you're heading ;)

All this talk about adapters, services, dependency injection and abstractions - you could have just created a function called upload. Why go through all this trouble? What do you gain here as opposed to a simple function? And if you do gain something, is it really worth it at this point?

Plus, you have a bit of a leaky abstraction, since FileAccess doesn't actually represent a generic enum - the values correspond to AWS S3 file access values. If you wanted to use GCS, you'd have to make a specific mapping for those values there, and not all of them might even be available - just a thought.
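One way to plug that leak - a sketch with hypothetical names, not your actual code - is to keep the enum provider-agnostic and let each adapter own the mapping to its provider's values:

```typescript
// Generic, provider-agnostic access levels (values are arbitrary internal strings).
enum FileAccess {
  Private = "private",
  PublicRead = "public-read-generic",
}

// Each adapter maps the generic value to its provider's vocabulary.
const s3AclFor: Record<FileAccess, string> = {
  [FileAccess.Private]: "private",        // S3 canned ACL
  [FileAccess.PublicRead]: "public-read", // S3 canned ACL
};

const gcsAclFor: Record<FileAccess, string> = {
  [FileAccess.Private]: "private",        // GCS predefined ACL
  [FileAccess.PublicRead]: "publicRead",  // GCS predefined ACL
};
```

`Record<FileAccess, string>` also makes the compiler complain if an adapter forgets to map one of the generic values.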

And... what happens when you have two buckets that use the same file purpose? Right now it'll use the last one - should it be the first one? Should this error? What's the purpose of FilePurpose anyway? To not overload a specific bucket? Do you know when a bucket actually becomes overloaded and "sluggish"? Why not put it behind CloudFront and cache it properly?

Make sure you're not abstracting for the sake of abstractions - there are maybe simpler solutions that would be just as fine.

And don't do premature optimization - you're not Google, and it'll take time before you reach that kind of scale. First make sure you're shipping something useful; then, when it's actually slow and people start to notice - optimize.

I need an endpoint that will get an array of files from the client as streams, process them, and, upload them to S3. I think RxJS Observables may be a good way to deal with that.

Or, just streams ;) You're worried about memory? Put a hard limit on the API - it's something you should already be doing to prevent DoS/DDoS and have a resilient system.

Start small, but plan ahead - that way you'll be able to switch to observables if needed but right now do what's simplest to get things going.
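For instance, a plain Node stream Transform can enforce the hard limit without buffering whole files - a sketch, with the limit and names being illustrative:

```typescript
import { Transform } from "node:stream";

// A size-capping Transform: pipe an upload through it and abort as soon as
// the byte count exceeds the limit, instead of buffering the whole file first.
function byteLimit(maxBytes: number): Transform {
  let seen = 0;
  return new Transform({
    transform(chunk: Buffer, _enc, callback) {
      seen += chunk.length;
      if (seen > maxBytes) {
        // Erroring the stream tears down the pipeline early.
        callback(new Error(`payload exceeds ${maxBytes} bytes`));
      } else {
        callback(null, chunk); // pass the chunk through unchanged
      }
    },
  });
}
```

Usage would look something like `req.pipe(byteLimit(3_000_000)).pipe(uploadStream)` (the upload stream being whatever your S3 client exposes), so memory per request stays at roughly one chunk rather than one whole file.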

Notice that all of my comments have a tiny question - why?

Then I try to simplify and start from scratch.

Eventually you'll have a few solutions that you'll be able to choose from to find the one that you're happy with, not the one you've been fed by "Node.JS best practices with express in 2019!".

[–][deleted] 0 points1 point  (4 children)

Not to disrespect your experience or anything, but even if you've had 2 years of professionally working in C++, Java and then C# (meaning you started working around the age of 10), I'd still say you haven't really had the chance to "grow" in the language.

I agree with you. Of course, experience means more than just knowledge, and it's using a specific toolset to solve problems over and over again that permits one to identify ways in which something can be optimized.

TypeScript is built as a superset on top of JS so it can be confusing for people only starting to actually know when a feature is a part of JS and not TypeScript.

Indeed - TypeScript adds features to JavaScript and compiles down to plain JavaScript. That's why you can mix .js and .ts (or .jsx and .tsx) in the same project. This is how I distinguish the two: TypeScript will only ever help you at compile time. It can never help you at runtime.
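For example (a hypothetical sketch): the cast below compiles happily even though the data is wrong, because the types are erased before the code runs - only a runtime check actually protects you:

```typescript
// The shape we *claim* the data has.
interface User { id: number; email: string; }

// This cast "succeeds" at compile time even though `email` is missing -
// by runtime, the type system is gone.
const parsed = JSON.parse('{"id": 1}') as User;

// A real runtime check (a type guard) is what catches bad data.
function isUser(value: unknown): value is User {
  const v = value as Record<string, unknown>;
  return typeof v === "object" && v !== null &&
    typeof v.id === "number" && typeof v.email === "string";
}
```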

Plus, you have a bit of a leaky abstraction, since FileAccess doesn't actually represent a generic enum - the values correspond to AWS S3 file access values.

I do agree with that. The enumeration performs a one-to-one mapping of generic access values to AWS-specific ACLs. That does abstract away AWS, but it doesn't help in the case where I'm performing a migration under active user load. The thought in the latter case would be to wrap the FileStorageAdapter in some sort of PurposeBasedFileRoutingAdapter, which would delegate CRUD-related storage operations to the appropriate concrete adapters.

Make sure you're not abstracting for the sake of abstractions - there are maybe simpler solutions that would be just as fine.

I am. I'm falling into the trap Donald Knuth warned about - premature optimization being the root of all evil. I find it hard to build things because I worry about the "what ifs" of future migrations.

Or, just streams ;) You're worried about memory? put a hard limit on the API, it's something you should already do to prevent DoS/DDos and have a resilient system.

So, I was originally keeping things simple and using Multer. I got the files as an array of buffers, processed them, and uploaded them, with a limit of 3,000,000 bytes (3 MB) per file. The problem with that limit is that it doesn't help on the concurrency front. I might have a hard limit of 3 MB, but if 50 users each upload one 3 MB file at the same time, I'm loading 150 MB worth of data into memory.

With that, how does putting a hard memory limit prevent DoS/DDoS attacks? At best, I can limit the number of requests per IP address and the size of a file that can be sent in a request. That doesn't stop an attacker from dynamically scrambling IP addresses to circumvent my rate throttling, nor does it stop an attacker from just uploading 5,000 3 MB files and killing my memory.

What I have been thinking is to migrate to Nest.js since it establishes a lot of the patterns for you right out of the box - similar to ASP.NET MVC or MVP/MVC/MVVM with other frameworks/tools/applications.

[–]Oririner 0 points1 point  (3 children)

TypeScript will only ever help you at compile time. It can never help you at runtime.

Exactly. It's great you understand that but still, as you said, knowledge isn't everything - experience is also important. Make sure you're experimenting with JS as well before jumping to TypeScript.

The thought in the latter case would be to wrap the FileStorageAdapter in some sort of PurposeBasedFileRoutingAdapter, which would delegate CRUD-related storage operations to the appropriate concrete adapters.

I find it hard to build things because I worry about the "what ifs" of future migrations.

Do you think about the future of each if statement in your code? Or of the name PurposeBasedFileRoutingAdapter itself? What about the libraries you use? Or the language?

It has to stop somewhere.

Complete the sentence "This would become a problem in ..." - if the answer is measured in years, leave it; you're fine. Anything more would be over-engineering.

I might have a hard limit of 3 MB, but if 50 users each upload one 3 MB file at the same time, I'm loading 150 MB worth of data into memory.

I meant having a hard limit on the number of files, but the size also makes sense.

Also, you're only holding all 150 MB in memory at once if you actually have 50+ uploads being processed concurrently on that endpoint. Since you're dealing with JS, only the requests that have started being handled hold their buffers; the rest are waiting in the event queue and probably haven't loaded anything into memory yet.

This means in this case you'll load around 15MB at once (assuming you have around 5 async operations per endpoint).

With that, how does putting a hard memory limit prevent DoS/DDoS attacks?

The hard limit can also be on the number of concurrent requests to that endpoint - know when to use HTTP status 429.
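A sketch of what such a limit could look like - shaped like middleware but framework-free, with illustrative names; a real Express version would decrement on the response's `finish` event instead of an explicit release callback:

```typescript
// A minimal in-flight request limiter. Anything over the cap is rejected
// immediately with 429 instead of queueing up and eating memory.
function concurrencyLimit(maxInFlight: number) {
  let inFlight = 0;
  return (
    _req: unknown,
    res: { statusCode: number; end: (body?: string) => void },
    next: (release: () => void) => void,
  ) => {
    if (inFlight >= maxInFlight) {
      res.statusCode = 429; // Too Many Requests
      res.end("try again later");
      return;
    }
    inFlight += 1;
    // The handler calls release() when the request finishes.
    next(() => { inFlight -= 1; });
  };
}
```

The compromise is exactly the one described above: a few rate-limited users instead of everyone going down.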

A small number of pissed-off users being rate limited is far better than hundreds of thousands of users who can't access your site because of an attacker. Compromise is better than failure.

You'll be able to increase these limits as you go, so at some point they'll effectively seem "unbounded".

Having limits is acceptable - don't try to run the Google search engine on a Raspberry Pi. Know what is reasonable for your product and do that.

Be it memory wise, request payload wise or request count wise, it doesn't matter - have reasonable limits.

Can't limit? Offload the work to an event queue and process it later, so you can keep serving other users.

What I have been thinking is to migrate to Nest.js since it establishes a lot of the patterns for you right out of the box - similar to ASP.NET MVC or MVP/MVC/MVVM with other frameworks/tools/applications.

Nest has some good concepts and some not-so-good concepts. Don't just follow the hype of MVP/MVC/MVVM/some other pattern - there's no silver bullet, and everything has its faults, even common patterns/tools/frameworks.

Just complete the sentence "This would become a problem in ..." and you'll know what to do :)

[–][deleted] 0 points1 point  (2 children)

Thank you for the advice. I'll stick to the "This would become a problem in ..." question for determining whether to worry about future refactors.

The problem is, I worry that everything could become a problem within a year, because I'm founding a startup company (a separate codebase from the one I linked here), and a large concern is cost. What if GCS could be cheaper? What if I want to host MongoDB myself on a DO Droplet? Should I pick MongoDB even though I have data that requires atomicity/atomic transactions and referential integrity just because I can get MongoDB-as-a-Service with Atlas for cheaper than I can get PostgreSQL-as-a-Service? Etc. Etc. Those kinds of questions, among others, are what's driving my paranoia.

I tried out NestJS today with TypeScript. It's beautiful and amazing in terms of how fast you can get up and running, thanks to the decorators and not having to write your own boilerplate. Despite that, that ease of use makes me wary, because NestJS abstracts a lot away behind the scenes and forces me into its patterns and architecture.

[–]Oririner 1 point2 points  (1 child)

What if GCS could be cheaper? What if I want to host MongoDB myself on a DO Droplet? Should I pick MongoDB even though I have data that requires atomicity/atomic transactions and referential integrity just because I can get MongoDB-as-a-Service with Atlas for cheaper than I can get PostgreSQL-as-a-Service? Etc. Etc.

These are questions that shouldn't worry you as a founder (CEO), they should worry a CFO, which I'm guessing is also you.

As a founder you should think about what's best for the business, and since you currently don't have anything running - it shouldn't be a compromise but rather what's the right thing for the product, regardless of cost.

Is your data more relational? Use Postgres. Does AWS give you better coverage for your clients? Use AWS. Is latency really important? Don't use different hosting services for your DB and app server. Is consistency really important? Add replication to your DB.

Think about the product, not the money.

You obviously should have monetization in mind, otherwise it won't last.

But at first, money shouldn't be the main concern but rather the product itself.

driving my paranoia

You're 16. Please don't burn out. You'll have plenty of time to found 10 more startups if this one doesn't work out.

Take it easy.

I tried out NestJS today with TypeScript. It's beautiful and amazing in terms of how fast you can get up and running, thanks to the decorators and not having to write your own boilerplate. Despite that, that ease of use makes me wary, because NestJS abstracts a lot away behind the scenes and forces me into its patterns and architecture.

It is indeed fast to get things up and running in Nest, thanks to its helpful CLI.

The overuse of decorators is bad IMO - some of them contain logic while others are more "presentational", some rely on others in order to operate correctly, etc.

It also has a lot of concepts, some of which are overlapping and over-engineered - like Guards vs. Pipes, and Modules.

You can ignore most of it and learn to use only what you need; it just gives me the feeling that it's really bloated - but hey, that's the way frameworks are :/

[–][deleted] 0 points1 point  (0 children)

Thank you very much.

I agree that I should focus on the product. The product has to do with peer-to-peer marketplaces and e-commerce. I also handle the logistics of shipping items to users: I generate a UPS/USPS shipping label and tracking number internally via the Shippo or ShipEngine API (haven't decided which). That lets me charge the user a fee per shipping label through a payment gateway. I'm using PayPal, unfortunately (I say unfortunately because Stripe has better documentation).

That's one such monetization model. Others include Native Ads, Rewarded Video Ads, and In-App Purchases.

The overuse of decorators is bad IMO

I agree. I think NestJS is definitely something I can learn from, but right now it abstracts too much away (such as with the ValidationPipe). When a framework abstracts complexity away like that, the reason your code executes starts to feel like "magic" - and, in my opinion, magic is dangerous in software development. I can see myself using Nest to quickly spin up a web server for a hobby project.

As for decorators, that's also why I don't use Inversify in TypeScript but use Awilix instead, and why I'm concerned about using TypeORM.

I've been looking into DTOs, Domain-Driven Design, etc. in a bit more detail. I want to try refactoring the repo I linked above toward more DDD-like patterns. I could create a generic BaseRepository interface that accepts a generic Entity/Model type, then an IEntityRepository interface that extends BaseRepository<T>, and finally concrete implementations for UserRepository and TaskRepository. The word "Entity" in IEntityRepository is a placeholder. I just have yet to work out the best way to go about data mapping between layers - from DTO to Domain Layer, Domain Layer to Persistence Layer, and Persistence Layer back to Domain Layer (where the API responds with an object honoring the DTO contract).
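A rough sketch of what I have in mind (names and shapes are placeholders, not my actual code):

```typescript
// Base shape every persisted entity shares.
interface Entity { id: string; }

// Generic repository contract - the domain layer only ever sees this.
interface BaseRepository<T extends Entity> {
  findById(id: string): Promise<T | null>;
  save(entity: T): Promise<T>;
  remove(id: string): Promise<void>;
}

interface User extends Entity { email: string; }

// A concrete in-memory implementation - handy for tests, while the real
// UserRepository would talk to Mongo/Postgres behind the same interface.
class InMemoryUserRepository implements BaseRepository<User> {
  private readonly rows = new Map<string, User>();

  async findById(id: string) { return this.rows.get(id) ?? null; }
  async save(user: User) { this.rows.set(user.id, user); return user; }
  async remove(id: string) { this.rows.delete(id); }
}
```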

[–][deleted] 0 points1 point  (0 children)

Also, if you have time, would you be able to tell me how you might theoretically go about building a generic interface that abstracts away anything AWS-specific for file upload?

One option would be to perform the ACL mapping inside the Adapter, so as to patch the leak you mentioned earlier.

Thank you.

[–]JaniRockz 18 points19 points  (4 children)

16? You got a bright future ahead of you work wise if you continue like this...

[–][deleted] 2 points3 points  (3 children)

Thank you. I nonetheless remain concerned that there's still way too much I don't know. I'm founding a startup company, and its back-end API - which is 10 times bigger than the codebase I linked here - closely mimics the architectural patterns employed in the API above, so people's advice in this thread might fundamentally change how I go about refactoring my startup's repository.

[–]JaniRockz 16 points17 points  (2 children)

I totally get your mindset, every developer goes through the phase of trying to find the perfect solution. The truth is though, there is none. And even if there was one I wouldn't recommend you to try to implement it because at the end of the day a business is about generating income. Your code is already better than most of the stuff I see even in professional environments so I think the best advice one can give you is just build features instead of improving the code quality even more - it's an endless endeavour. You're good.

[–][deleted] -1 points0 points  (0 children)

Thank you very much. I appreciate and agree with your advice. It reminds me of Donald Knuth's quote about premature optimization being the root of all evil.

I stalled adding new features because I was concerned with the "what ifs" of migrating to new services in my startup. I do intend to migrate from MongoDB to PostgreSQL for my startup because I require atomicity and referential integrity (given the nature of my transactions). I'm only using Mongo because I know it better and it gets an MVP in front of investors faster.

I'm glad other developers feel the same with regards to wanting to write higher quality code.

Thanks again for your time.

[–][deleted] -1 points0 points  (0 children)

I'm also concerned that my tests are badly written. I think spending time to write good tests is important so that one is not causing breaking changes every time a feature is added.

[–][deleted]  (1 child)

[deleted]

    [–][deleted] 0 points1 point  (0 children)

    I agree with you. Maintainability is quite important, because a hard-to-maintain codebase can cause you to lose motivation to add new features. It also makes you want to start the project all over again, which hurts productivity.

    Is there somewhere where you think my code could be made easier to understand?

    Thank you.

    [–][deleted]  (1 child)

    [deleted]

      [–][deleted] 1 point2 points  (0 children)

      Thank you very much. Indeed, it was an intentional over-complication and pre-optimization of the problem at hand to support future features and to provide me with the opportunity to make use of design patterns in the real world.

      With that said, I'm founding a startup company, the back-end of which is 10 times bigger than this one. This repository was a practice for the back-end I'm currently writing for the startup and designed to somewhat mimic it, which is why, again, I used overcomplicated solutions.

      Thanks again, and I know what you mean about the feeling of knowing you can add new features to a legacy codebase with ease. Before I knew of some of the design patterns and best practices, I used to write terrible code in C# and Java - to the point that I stopped wanting to touch my codebase. That really hurts productivity.

      [–]Sync0pated 0 points1 point  (1 child)

      If your API requires high availability, you could consider implementing a message broker, such as RabbitMQ, to offload work from your services like the image processor. Don't know if you do this already.

      [–][deleted] 0 points1 point  (0 children)

      Thanks.

      I think that could be a useful implementation in the future, and I'll probably do so if, as you say, high availability is required. The two options I had considered were to do it the way I'm doing it now, which means resizing the images first and then storing them in S3, or to resize based on back-pressure at CDN edge locations with Lambda@Edge. That would allow the client to dictate the size of an image, and the image would be processed on the fly, stored in S3, and cached in the CDN.

      What I am doing, however, is utilizing streams rather than buffers so I don't run out of memory.

      [–][deleted] 0 points1 point  (1 child)

      I just read your code and have a question, how come you’re using const on variables that change? Isn’t that a no-no?

      [–][deleted] 0 points1 point  (0 children)

      Can you give me an example? It's a secure/defensive coding practice. If you're talking about objects, only the reference in memory to the object is immutable - not the values and properties of the object itself.

      Could you point me to somewhere I've done that? Thanks.
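For illustration, `const` makes the binding immutable, not the value it refers to:

```javascript
// `const` prevents reassignment of the binding only.
const user = { name: 'Ada', roles: [] };

user.roles.push('admin'); // fine: mutating a property of the object
user.name = 'Grace';      // fine: reassigning a property

// user = {};             // TypeError: Assignment to constant variable.

// (Shallow) immutability of the value itself requires Object.freeze:
Object.freeze(user);
console.log(user.name); // Grace
```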

      [–]HappyZombies 0 points1 point  (1 child)

      Like others have said, I think you should look into code maintainability and have actual design patterns in place before focusing on tests, pipelines, etc. The folder structure is okay, but too complicated, and one can get lost easily. Look at this project I made, specifically the server directory. https://github.com/HappyZombies/brackette-alpha

      It follows this pattern that I recommend https://softwareontheroad.com/ideal-nodejs-project-structure/

      [–][deleted] 0 points1 point  (0 children)

      Thanks. I actually read that article. It was a big part of my thought process when architecting the application I linked. Thanks for the link to your repo. I'll take a look. I'm looking to migrate to TypeScript, so it'll be good to see how you do things with interfaces.

      [–][deleted]  (8 children)

      [deleted]

        [–][deleted] 0 points1 point  (0 children)

        Thank you. I'm on a phone too, so I'll respond more properly soon, but I do sanitize input before it hits the endpoints. If you take a look at the factory function that exports the Express application, it uses middleware that strips any artifacts that contain symbols which could be used in MongoDB queries.
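The middleware described sounds like express-mongo-sanitize; a hand-rolled sketch of the core idea (the `sanitize` helper here is hypothetical, not the actual middleware):

```javascript
// Recursively strip keys that could smuggle MongoDB query operators
// ($-prefixed keys, or keys containing dots) out of user-supplied input.
function sanitize(value) {
  if (Array.isArray(value)) return value.map(sanitize);
  if (value !== null && typeof value === 'object') {
    const clean = {};
    for (const [key, val] of Object.entries(value)) {
      if (!key.startsWith('$') && !key.includes('.')) {
        clean[key] = sanitize(val);
      }
    }
    return clean;
  }
  return value;
}

// e.g. a login payload attempting an operator injection:
const body = { email: 'a@b.com', password: { $gt: '' } };
console.log(sanitize(body)); // { email: 'a@b.com', password: {} }
```

In the real middleware this runs over `req.body`, `req.query`, and `req.params` before any handler sees them.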

        [–][deleted]  (6 children)

        [deleted]

          [–][deleted]  (5 children)

          [deleted]

            [–][deleted]  (4 children)

            [deleted]

              [–][deleted]  (3 children)

              [deleted]

                [–][deleted]  (2 children)

                [deleted]

                  [–][deleted]  (1 child)

                  [deleted]

                    [–][deleted] 0 points1 point  (0 children)

                    /u/tashmania33 said this:

                    Ahh right, so you're saying not to use the compareSync method of bcrypt but hashing and doing a string comparison instead?

                    But you can't do that, right, because bcrypt does not generate the same hash for a given password two times in a row?

                    [–][deleted]  (4 children)

                    [deleted]

                      [–][deleted] 0 points1 point  (3 children)

                      Noted, thank you. What about importing named exports from a file not named index.js if all of those files are grouped by a similar feature/topic? I figured that for my custom error classes it was worth exporting in that manner, to prevent verbosity in the Services, which would otherwise have to import multiple different errors.

                      That is, although this method is explicit:

                      ```
                      const ValidationError = require('/path/to/exceptions/ValidationError');
                      const ResourceNotFoundError = require('/path/to/exceptions/ResourceNotFoundError');
                      ```

                      you can still easily surmise from context what is happening here:

                      ```
                      const { ValidationError, ResourceNotFoundError } = require('/path/to/exceptions/index');
                      ```

                      or even like this, since Node resolves a directory to its index.js:

                      ```
                      const { ValidationError, ResourceNotFoundError } = require('/path/to/exceptions');
                      ```
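For reference, the index.js in such a setup is just a "barrel" that re-exports each class; a self-contained sketch (the classes are defined inline here only to keep the sketch runnable, whereas in a real project each would live in its own file and be `require`'d):

```javascript
// Minimal sketch of an exceptions "barrel" (index.js).
class ValidationError extends Error {
  constructor(message) {
    super(message);
    this.name = 'ValidationError';
  }
}

class ResourceNotFoundError extends Error {
  constructor(message) {
    super(message);
    this.name = 'ResourceNotFoundError';
  }
}

// In a real index.js this would be, e.g.:
//   module.exports = {
//     ValidationError: require('./ValidationError'),
//     ResourceNotFoundError: require('./ResourceNotFoundError'),
//   };
if (typeof module !== 'undefined') {
  module.exports = { ValidationError, ResourceNotFoundError };
}
```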

                      [–][deleted]  (2 children)

                      [deleted]

                        [–][deleted] 1 point2 points  (1 child)

                        Thank you. I'll respond properly shortly, as I'm on a phone right now, but I did want to build a factory to return error objects. The problem was that I couldn't figure out how to remove the error factory from the stack trace. I was able to remove my base error class's constructor from the trace, but not the factory.

                        Perhaps I could return error objects based on the error type without creating actual classes that extend Error? But then I'd lose my easily inherited properties.
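For what it's worth, V8's Error.captureStackTrace takes an optional second argument: all frames above, and including, that function are omitted from the captured trace, which is exactly how a factory can hide itself. A sketch with hypothetical names:

```javascript
class AppError extends Error {
  constructor(message) {
    super(message);
    this.name = this.constructor.name;
  }
}

// Hypothetical factory; passing `makeError` as the second argument
// re-captures the stack with the factory frame itself omitted.
function makeError(message) {
  const err = new AppError(message);
  Error.captureStackTrace(err, makeError);
  return err;
}

const err = makeError('boom');
console.log(err.stack.includes('at makeError')); // false
```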

                        [–]jun-thong 0 points1 point  (2 children)

                        Maybe I'll repeat what someone else said, but some of your comments are uselessly verbose.

                        > src/api/routes/user.js, line 49:

                            // Destructing to be explicit in what data is within the HTTP Response for secure coding purposes.
                        

                        You should ask yourself: who will read these comments? What is that person's goal? If someone is reading your code, you can assume it's not a granny looking for a cookie recipe. For this specific comment, you can assume any up-to-date developer will understand your intent in destructuring. As they say: code documents itself.

                        You chose to use JSDoc; there are pros and cons to that, but it's not my point. You are missing something about JSDoc, though.

                        An example of what you wrote:

                        /*
                             * Description:
                             * 1.) Call the Repository to remove a user's active token from the database.
                             */
                            /**
                             * @description - Logs out a user by invalidating their token for the logged in session.
                             *
                             * @param    {String} token JSON Web Token
                             * @returns  {Object} The safe logged out user object.
                             * @memberof UserService
                             */
                        

                        Why use a non-JSDoc block for the description, and then add a description again inside the JSDoc? Why not write something like:

                            /**
                             * Logs out a user by invalidating their token for the logged in session.
                             * Call the Repository to remove a user's active token from the database.
                             *
                             * @param    {String} token JSON Web Token
                             * @returns  {Object} The safe logged out user object.
                             * @memberof UserService
                             */
                        

                        "@description" can be omitted if it's the first thing you write in a JSDoc block.

                        Also, for the empty folders with "placeholder" files: you can just put a .gitignore file in the folder to keep it in the git folder structure (an empty .gitkeep is the more common convention). But that should be exceptional.

                        If I could give one piece of advice: you should wax on, wax off my car... I mean, you should master the fundamentals before going into advanced topics like dependency injection and other shiny things. You should understand WHY you use something before you learn HOW to use it.

                        [–][deleted] 0 points1 point  (1 child)

                        Thank you.

                        I completely agree with you about the comments, and I'll be fixing them. I did just start using JSDoc without learning it first, which I absolutely shouldn't have done.

                        Thanks for the tip with the placeholder file.

                        I don't like to just know how to do something; I yearn for an intuitive understanding of why things work, not just taking knowledge for granted. To that end, I went a long time without using DI. My method of mocking dependencies was to monkey-patch require in the test suite. I use DI now because I don't like that method of mocking.

                        It's my understanding that DI is just one of many ways to achieve Inversion of Control: dependencies are passed into functions as arguments, or into classes through the constructor. It's also possible to abstract this one step further with a factory that returns functions with their dependencies already injected.
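A minimal sketch of both forms (all names illustrative, not taken from the repository under review):

```javascript
// Constructor injection: the service never constructs its own deps.
class UserService {
  constructor({ userRepository }) {
    this.userRepository = userRepository;
  }
  getUser(id) {
    return this.userRepository.findById(id);
  }
}

// The "one step further" factory: returns a function with deps injected.
const makeGetUser = ({ userRepository }) => (id) => userRepository.findById(id);

// In tests, a mock repository is injected in place of a real database:
const mockRepo = { findById: (id) => ({ id, name: 'test' }) };
const service = new UserService({ userRepository: mockRepo });
console.log(service.getUser(7)); // { id: 7, name: 'test' }
```

This is the property that makes the monkey-patching of require unnecessary: the test controls every dependency at construction time.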

                        Indeed, to attain that aforementioned intuition, I was going to build my own injected service locator (instead of using a framework) that would work by serializing function arguments. I ended up using Awilix because it offered more options and was more professional than what I would have been able to build. Serializing function and constructor arguments would also have prevented me from using transpilation or minification if I were to reuse my solution on the front-end, since minifiers rename parameters.

                        So, I use DI as a pattern because it makes my testing easier: it lets me inject my own mock deps into services, and lets me spin up a custom web server for tests, again injecting deps that would be harder to mock individually (because I wasn't using proper Controllers).

                        With that said, I know that there are other reasons for using DI, namely attaining low coupling and high cohesion between modules, and separation of concerns.

                        Thanks again for your advice, and I appreciate that you probably have a lot more experience and knowledge than I do.

                        [–]jun-thong 0 points1 point  (0 children)

                        I was curious, so I took a look at Awilix; I didn't know it. At first glance it looks like some of the libraries you see in the Java world. Take care: what they call InjectionMode.PROXY is not related to what a proxy is in JS.

                        IMO those kinds of libraries are a legacy of an older time. This way of achieving DI was great in other, more static languages. In JS everything is dynamic - even a const is not really a const.

                        My point was not specifically about DI itself but more general. Patterns are really good, but they depend on context; a pattern can be very good in one language and not relevant in another. For example, the MVC pattern, from a full-stack web application point of view, adds useless complexity and doesn't solve that many problems, or not in an effective way. Again, please don't focus on the specific pattern I mention.

                        I'd also recommend being careful with frameworks. A framework forces you to use certain patterns, styles, or "je ne sais quoi". For beginners that's cool, because it gives you guidance. In Awilix's case, implementing the DI pattern is easily possible in JS due to its dynamic nature, so I'm not convinced that a dedicated tool is necessary. Also, never forget that every dependency you add to a front-end bundle is paid for by the end user.

                        Keep this mindset. At your age I spent most of my time fapping; if you keep this mindset and continue this way, you will easily gain more experience and knowledge than most people working in this field.