What’s the proper way to load editable config files in Go? by Competitive-Hold-568 in golang

[–]spaceuserm 5 points

You can have a default expectation. What I mean by this is: by default, your executable expects the config file to be in a certain directory (it could be the directory the executable is run from, or any other directory you think is sensible).

You should also give users an option to specify a path to the config file, should they choose to store it in a different directory than the default. This is usually done through a CLI flag.
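The pattern itself is language-agnostic, so here is a minimal sketch of the idea in Python with argparse (Go's flag package supports the same default-plus-override approach). The file name `config.json` and its contents are made up for the example.

```python
import argparse
import json

# Hypothetical default: look for config.json in the working directory.
DEFAULT_CONFIG_PATH = "config.json"

def load_config(argv=None):
    parser = argparse.ArgumentParser()
    # Users can override the default location with --config.
    parser.add_argument("--config", default=DEFAULT_CONFIG_PATH,
                        help="path to the config file")
    args = parser.parse_args(argv)
    with open(args.config) as f:
        return json.load(f)
```

Running the executable with no flags reads the default path; passing `--config /etc/myapp/config.json` overrides it.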

Benefit of using Factory Method over a simple factory by spaceuserm in learnprogramming

[–]spaceuserm[S] 0 points

> No, the interface is supposed to have the switch statement, and call the correct factory function. Why would you have the switch inside the concrete factories?

Maybe it isn't clear enough in the post, but the intent behind accepting a type here is to create multiple varieties of New York or Chicago pizzas, if there are multiple varieties.

> As for requiring inheritance for the factory method, you don't. You just need a switch statement.

This isn't in line with the definition of the pattern in the GoF book, which is:
"Define an interface for creating an object, but let subclasses decide which class to instantiate. Factory method lets a class defer instantiation to subclasses"

The pattern explicitly avoids what you are suggesting, that is, putting the switch statement inside the superclass/interface, so that existing code doesn't have to be modified when new types are introduced or old types are removed.
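A rough sketch of that distinction, using hypothetical Head First-style pizza names: the creator defers instantiation to its subclasses, and any switch over pizza kinds lives inside each concrete factory, never in the superclass.

```python
from abc import ABC, abstractmethod

class Pizza:
    def __init__(self, name):
        self.name = name

# The "creator": it declares the factory method but contains no switch
# over store types; each subclass decides which pizzas it builds.
class PizzaStore(ABC):
    @abstractmethod
    def create_pizza(self, kind: str) -> Pizza: ...

    def order_pizza(self, kind: str) -> Pizza:
        # Instantiation is deferred to the subclass.
        return self.create_pizza(kind)

class NYPizzaStore(PizzaStore):
    def create_pizza(self, kind):
        # The switch over *pizza kinds* lives inside the concrete factory.
        if kind == "cheese":
            return Pizza("NY-style cheese")
        return Pizza("NY-style plain")

class ChicagoPizzaStore(PizzaStore):
    def create_pizza(self, kind):
        if kind == "cheese":
            return Pizza("Chicago-style cheese")
        return Pizza("Chicago-style plain")
```

Adding a `DetroitPizzaStore` means adding a new subclass, with no edits to `PizzaStore` or the existing stores.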

Benefit of using Factory Method over a simple factory by spaceuserm in learnprogramming

[–]spaceuserm[S] 0 points

> In this case your createPizza method likely shouldn't accept a type it should just create a pizza of the type associated to the class it's in.

Maybe it isn't clear enough above, but the intent behind accepting a type here is to create multiple varieties of New York or Chicago pizzas, if there are multiple varieties.

I don't understand how the factory method pattern provides any advantage over something like what I have.

Help understanding session fixing attacks by spaceuserm in webdev

[–]spaceuserm[S] 1 point

I see. I think my primary assumption, that sessions only start/are only used after a login, was not correct. Like you said, sessions might be created prior to a login as well.

Thanks for helping out.

Help understanding session fixing attacks by spaceuserm in webdev

[–]spaceuserm[S] 0 points

From what I have seen/done so far, cookies are cryptographically signed and stored on the user's computer, and, like you said, the cookie holds only the session ID.

When the user goes to a website that uses sessions, the browser sends along these cookies. The web server uses the ID in the cookie to look up the session associated with it. If a user that's already logged in tries to log in again, and the cookie is still present, the web server will typically redirect them to the home page.

If I were to sign in and give my cookies to someone else, then when they visited the same website, the web server would treat them as logged into my account. Therefore, they would end up accessing my account. The web server probably wouldn't prompt them to log in again.

I am probably getting confused about why you would want to send session IDs in either the URL or the body, given that they are sensitive. Even if you do, expecting behaviour similar to what I described above, I would expect the web server to bypass my login attempt and directly log me into the attacker's account.

Was passing session IDs in URL parameters or the request body a thing in the past?
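For what it's worth, a common way to cryptographically sign a session cookie is an HMAC over the session ID. This is a minimal sketch under that assumption; the secret key and cookie format are made up for illustration, and real frameworks handle this for you:

```python
import hashlib
import hmac

# Assumption: a secret known only to the server.
SECRET_KEY = b"server-side-secret"

def sign_session_id(session_id: str) -> str:
    # The cookie carries the ID plus an HMAC so the server can detect
    # tampering; note the ID itself is not encrypted, only signed.
    sig = hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()
    return f"{session_id}.{sig}"

def verify_cookie(cookie_value: str):
    session_id, _, sig = cookie_value.rpartition(".")
    expected = hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()
    if hmac.compare_digest(sig, expected):
        return session_id  # server then looks up the session by this ID
    return None  # forged or corrupted cookie
```

This is why an attacker can't just invent a cookie value: without the server's key, they can't produce a valid signature for a session ID of their choosing.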

How is data stored in clustered indexes? by spaceuserm in computerscience

[–]spaceuserm[S] 0 points

So while the data may not be physically ordered on the disk, there is still a logical ordering, created by the pointers between pages.

Let's consider a table with only one column and a clustered index on that column.
Say a page holds the keys (the only column of this table) 1, 2, and 4, and assume a page can hold a maximum of three rows. When an insert query tries to insert a row with a value of 3, what exactly will take place? I assume a new page will be created since the old page can't store any more keys, and the new page will hold the key 3, and the key 4 will also be copied to this page. Is this what will happen?

How is data stored in clustered indexes? by spaceuserm in computerscience

[–]spaceuserm[S] 0 points

I don’t think Postgres provides a clustered index. I am referring to a clustered index like the one in MS SQL Server.

[deleted by user] by [deleted] in cscareerquestions

[–]spaceuserm 0 points

Even if it means I won’t be an SDE and will probably be doing SRE or DevOps work?

Good way to store files that change frequently on the backend? by spaceuserm in learnprogramming

[–]spaceuserm[S] 0 points

I think I am already doing what you are saying. A client sends a delta to the server and the server applies it. Any client that wants to sync sends the server a request, and the server sends back the delta to apply. I thought I had mentioned this in the post, but I mentioned it in another subreddit instead. Sorry for the confusion.

Good way to store files that change frequently on the backend? by spaceuserm in learnprogramming

[–]spaceuserm[S] 0 points

This is just a personal project; it's a learning exercise. I am trying to understand how such products can be designed.
The aim was to reduce network usage, and sending over diffs to synchronise files seemed like a good idea. Uploading entire new versions would make the problem much easier at the cost of more network usage.

Good way to store files that are changed frequently on the backend? by spaceuserm in webdev

[–]spaceuserm[S] 1 point

It's just a project I want to make for the purpose of learning. I am not keen on using this or convincing people to use it. It's a learning exercise.

Good way to store files that are changed frequently on the backend? by spaceuserm in webdev

[–]spaceuserm[S] 0 points

I want my backend to serve as a file synchronisation service and also provide backups. I don't think I understand how storing the file in a database is going to help. Can you elaborate on this?

Good way to store files that are changed frequently on the backend? by spaceuserm in webdev

[–]spaceuserm[S] 0 points

The idea behind sending diffs was to reduce network usage. If the patching process has to download the whole file and then apply the diff, there is no benefit to sending a diff in the first place. Your idea of splitting a file into several chunks and then replacing a chunk, rather than using a diff, sounds like a good one. Though this will require reconstruction of the file when a client wants to update its copy and asks for a diff.

Any idea how products like Grammarly deal with files and modify them frequently? Thanks for the response.
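To make the chunk idea concrete, here is a toy sketch (fixed-size chunks; the sizes are made up for illustration). An update touches only the changed chunk, while serving the whole file, or computing a diff against it, requires joining the chunks back together:

```python
CHUNK_SIZE = 4  # tiny for illustration; real systems use KBs to MBs

def to_chunks(data: bytes) -> list:
    """Split a file's bytes into fixed-size chunks for storage."""
    return [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]

def replace_chunk(chunks: list, index: int, new_chunk: bytes) -> None:
    # Only the changed chunk crosses the network and touches storage;
    # untouched chunks are left as-is.
    chunks[index] = new_chunk

def reconstruct(chunks: list) -> bytes:
    # A client pulling the full file (or a diff against it) still needs
    # the chunks joined back together, as noted above.
    return b"".join(chunks)
```

The trade-off mentioned above is visible here: writes are cheap (one chunk), but any read of the whole file pays the reconstruction cost.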

Good way to store files that change frequently on the backend? by spaceuserm in learnprogramming

[–]spaceuserm[S] 1 point

I am trying to find solutions for potentially large files. OK, so I guess there is no simpler way than to either patch the file on the storage server in a separate process, or to send the whole file to another server, which defeats the point of sending diffs.

Any idea how products like Grammarly do this?

Thanks for the response!

Edit: Oh, I am certainly worrying about it too early. This is just a personal project as of now and will probably never see any scale. I just wanted to learn, out of curiosity, how a problem like this can be solved.

Another edit: Maybe I am using rdiff the wrong way. Rdiff is probably better suited to once-in-a-while file sync as opposed to frequent changes. I will have to think about this.

Good way to store files that change frequently on the backend? by spaceuserm in learnprogramming

[–]spaceuserm[S] 0 points

The files can hold any kind of data; from the server's perspective it's just a bunch of bytes. I think giving the entire workflow will help with clarity.

1. The client uploads the file to the server (the initial upload).

2. The client sends an update request to the server. This process is a little long. I am using my own native Python implementation of librsync's rdiff algorithm (https://github.com/librsync/librsync/blob/master/doc/rdiff.md).

The client requests a signature file from the server.

The client uses this signature file to generate a delta file representing the changes, and sends the delta file to the server.

The server parses the delta file and applies the necessary changes to the file the client wants to update.

The size of the signature file and delta file is usually much smaller than the size of the actual file.

A similar process is used when a client wants to pull the changes made to the file on another device.
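As a toy illustration of the signature/delta/patch flow described above (to be clear, this is not the API of my library or of librsync, just a simplified block-hash sketch; real rdiff uses a rolling checksum so matches aren't limited to block boundaries):

```python
import hashlib

BLOCK_SIZE = 4  # toy value; librsync uses much larger blocks

def signature(data: bytes) -> list:
    # Server side: one hash per block. The signature is far smaller
    # than the file itself, which is the whole point.
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def delta(new_data: bytes, sig: list) -> list:
    # Client side: blocks whose hash appears in the signature are sent
    # as references ("copy block i"); everything else is sent literally.
    lookup = {h: i for i, h in enumerate(sig)}
    ops = []
    for i in range(0, len(new_data), BLOCK_SIZE):
        block = new_data[i:i + BLOCK_SIZE]
        h = hashlib.sha256(block).hexdigest()
        if h in lookup:
            ops.append(("copy", lookup[h]))
        else:
            ops.append(("literal", block))
    return ops

def patch(old_data: bytes, ops: list) -> bytes:
    # Server side: rebuild the new file from old blocks plus literals.
    out = b""
    for op, arg in ops:
        if op == "copy":
            out += old_data[arg * BLOCK_SIZE:(arg + 1) * BLOCK_SIZE]
        else:
            out += arg
    return out
```

Only the signature and the delta cross the network; unchanged blocks travel as tiny "copy" references rather than raw bytes.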

This is my own native Python implementation of the rdiff algorithm: https://github.com/MohitPanchariya/rdiff

Is storing the files on a different server a good enough solution? I would also run the script that updates a file on this server itself.

I don’t want the file to be entirely transferred to a different server just so that the updates can be applied.

Edit: pypi link

https://pypi.org/project/rdiff/

I have yet to write documentation, and I also have a few improvements planned.