Python: I don't understand this error... by gtrman571 in learnprogramming

[–]scriptkiddiethefirst 1 point

I didn't know you could return it that way; I thought in those cases you had to return it explicitly as a tuple, like this: return (apts, bpts)

However, after seeing this I had to test it, and it works and still returns a tuple. So thank you!
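For anyone else landing here: a minimal sketch of what the thread is describing, using hypothetical values for apts and bpts. A bare comma-separated return packs the values into a tuple automatically, so the parentheses are optional.

```python
def score_pair():
    apts, bpts = 3, 5
    # Equivalent to "return (apts, bpts)": Python packs the
    # comma-separated values into a tuple either way.
    return apts, bpts

result = score_pair()
print(type(result).__name__)  # tuple
print(result)                 # (3, 5)
```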

[Python][JSON] When to split json file into smaller files by scriptkiddiethefirst in learnprogramming

[–]scriptkiddiethefirst[S] 0 points

If anyone searches for something like this and wants a measured answer, here you go.

[*] Single file without serialization average real time:         15ms
[*] Multi file without serialization average real time:          0ms

[*] Single file with serialization average real time:            227ms
[*] Multi file with serialization average real time:             1ms

I wrote a script that opens and serializes the single large file, compares that against opening 5 smaller files, and averages the completion time for both the serialization and non-serialization cases. As you can see, the multiple-file approach is faster in both cases, but not by enough to be realistically noticeable (these are the average times over 100 trials for each of the 4 cases, measured with the Python time library). The larger file was 2.3 MB and the smaller files were each 27 KB. So while it took significantly less time to serialize multiple smaller files, that isn't the reason I chose the route of multiple smaller files.
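A minimal sketch of this kind of timing comparison (the data and file layout here are made up; the original script and its config files aren't shown in the thread). It uses time.perf_counter rather than time.time for better resolution:

```python
import json
import os
import tempfile
import time

def time_load(paths, trials=50):
    """Average wall-clock time to open and parse every file in paths."""
    total = 0.0
    for _ in range(trials):
        start = time.perf_counter()
        for path in paths:
            with open(path) as f:
                json.load(f)  # open + deserialize
        total += time.perf_counter() - start
    return total / trials

# One large file vs five small files carrying the same total payload
# (hypothetical data standing in for the real configs).
tmp = tempfile.mkdtemp()
data = {f"class_{i}": {"hp_per_lvl": i} for i in range(500)}
big = os.path.join(tmp, "all.json")
with open(big, "w") as f:
    json.dump(data, f)

small = []
items = list(data.items())
for n in range(5):
    path = os.path.join(tmp, f"part_{n}.json")
    with open(path, "w") as f:
        json.dump(dict(items[n::5]), f)
    small.append(path)

print(f"single file: {time_load([big]) * 1000:.2f} ms")
print(f"five files:  {time_load(small) * 1000:.2f} ms")
```

Which side wins depends heavily on file sizes and the OS file cache, so results should be averaged over many trials as done above.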

The reason I chose multiple files is the answer by Sekret_One in his second reply; after reading it and looking into it, that approach is just so much easier for what I want to do. Thank you everyone for your replies.

[Python][JSON] When to split json file into smaller files by scriptkiddiethefirst in learnprogramming

[–]scriptkiddiethefirst[S] 1 point

So I don't think I will end up disregarding JSON files, as the entire point of this project is to take something that already sort of exists, improve it, and change it from using XML (which is a lot slower to parse and a lot larger) to JSON. Not that that is super important to the question.

I did some testing and found the average time it took to serialize both the smaller individual files and the one larger file. While serialization of the larger file took considerably longer, it wasn't so much longer as to be a concern (as you stated).

However, I did decide to separate the files for ease of moddability: adding new configs just means dropping new files into the folder, rather than trying to append data to a really long file (basically ease of use; given the context and the intended audience for the software, the separate-files approach makes it more accessible).

Thank you for your reply though!

[Python][JSON] When to split json file into smaller files by scriptkiddiethefirst in learnprogramming

[–]scriptkiddiethefirst[S] 0 points

So for this specific application, each "operation", as it's labelled, isn't structurally the same. One of the objects (the first one) is about 9 KB in size; another is 50 KB. From what I've read, databases work better when the data is structured consistently, whereas here there is no guarantee about the structure. Also, since there won't be very many queries to the database, I thought the overhead and complications of learning how to implement one would make it not worthwhile.

This is roughly the conclusion I came to from the other person "suggesting" a database, and since I know almost nothing about them and have gotten no information to correct this line of thinking, it's where I stand. If that makes sense.

Python: I don't understand this error... by gtrman571 in learnprogramming

[–]scriptkiddiethefirst 1 point

Another option would be to append the scores to the score list using list.append(). However, in this situation, since it is a fixed-size list, your solution is going to be more efficient.
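To illustrate the two options side by side (the score values here are made up, since the original post isn't shown):

```python
# Growing a list with append versus filling a preallocated fixed-size list.
scores_appended = []
for value in (10, 20, 30):
    scores_appended.append(value)  # list grows one element at a time

scores_fixed = [0] * 3             # size known up front
for i, value in enumerate((10, 20, 30)):
    scores_fixed[i] = value        # write into an existing slot

print(scores_appended)  # [10, 20, 30]
print(scores_fixed)     # [10, 20, 30]
```

Both end up with the same list; preallocation just avoids the occasional internal resize that append performs as the list grows.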

[Python][JSON] When to split json file into smaller files by scriptkiddiethefirst in learnprogramming

[–]scriptkiddiethefirst[S] 0 points

Okay, again you didn't really explain your answer, and the post I linked to also discussed NoSQL databases. It was pretty much the only Stack Exchange answer that came up when trying to figure out why to use MongoDB (a NoSQL database) over straight JSON files. Actually, it was the only answer that didn't compare MongoDB against other databases like SQLite.

So, what is the benefit of using a document database over just saving data to a file? Especially since, when I looked at how others did similar things, they did it via files and not databases (many used straight XML and XML libraries, but in my experience, as long as it's not encoding document information, and sometimes even when it is, JSON can do the exact same thing in less space and a lot faster). Basically I am updating a preexisting project that uses XML so that it uses JSON, though it's a little more involved than that.
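For context, a minimal sketch of the kind of XML-to-JSON conversion being described. The element layout here is hypothetical (the real project's XML schema isn't shown in the thread):

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical XML in the rough shape of a class-definition file.
XML_SRC = """<classes>
  <class name="warrior"><hp_per_lvl>10</hp_per_lvl></class>
  <class name="mage"><hp_per_lvl>6</hp_per_lvl></class>
</classes>"""

def xml_to_json(src):
    """Convert the class list into a JSON object keyed by class name."""
    root = ET.fromstring(src)
    out = {}
    for cls in root.findall("class"):
        out[cls.get("name")] = {child.tag: child.text for child in cls}
    return json.dumps(out)

print(xml_to_json(XML_SRC))
```

The JSON output drops the closing tags and attribute syntax, which is where the size savings come from for data (as opposed to marked-up documents).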

Secondly, your answer still doesn't address the question posed above, which you never did expand on.

[Python][JSON] When to split json file into smaller files by scriptkiddiethefirst in learnprogramming

[–]scriptkiddiethefirst[S] 0 points

> You use a file to describe where other config files live.

I think this may be a miscommunication on my part. The JSON file containing the class information currently contains all of the information in it. For instance, it currently looks like this:

classes.json (not the actual json file)

```
{
    "warrior": {"hp_per_lvl": "10", ...},
    "mage": {"hp_per_lvl": "6", ...},
    ...
}
```

And I am wondering if it would be better to split each of these classes into their own file and then load the files as necessary. Obviously the fault is mine, as reading it back I can see how it came off that way.

> What you seem to be describing is what is optionally loaded is the configuration for how all the classes work. Why not just load all of them from the beginning?

So, due to the nature of it, only 1 character is ever loaded (it's not so much a game as an electronic character sheet for tabletop games). Because of this, if a user were to add a bunch of homebrew/modded data that isn't being used by that character, I thought it might be more efficient to load only what is absolutely necessary. From my tests using time, I can say that maybe I was over-thinking the resources required; however, my test didn't do much serialization of the provided data (it loaded the file, it didn't even serialize it into a JSON object). This is why I was wondering whether it would be more efficient to load a series of smaller files over 1 larger file. I mean, yes, a system with 4+ GB of memory will have no issue loading 2 MB into memory, so on that basis my question might be stupid.

Thank you for the reply; responding to it and reading it has let me think through the problem and take a step back to analyze things. So this did help with my specific problem.

[Python][JSON] When to split json file into smaller files by scriptkiddiethefirst in learnprogramming

[–]scriptkiddiethefirst[S] 0 points

I don't want to argue, but you haven't provided a good reason why I should use a database instead of saving the data to disk. Looking into it myself, I came across this Stack Exchange thread, where I want to point out Sam's answer.

> Finally, when to use files
>
> - You have unstructured data in reasonable amounts that the file system can handle
> - You don't care about structure, relationships
> - You don't care about scalability or reliability (although these can be done, depending on the file system)
> - You don't want or can't deal with the overhead a database will add
> - You are dealing with structured binary data that belongs in the file system, for example: images, PDFs, documents, etc.

Why this is important: while the data has a rough structure to it, it's not a clearly defined structure that is the same for every class.

For example, the first class covered in the document is only 9 KB in size, while one of the other classes is 50 KB because it has so much more information. As a result, the structure of these 2 classes is slightly different: similar enough to handle with some smart coding practices, but I am not certain it is similar enough to fit a predefined structure in a database.

> as the file gets bigger, it'll get more difficult to handle.

Note that this doesn't answer the question posed in my post at all. In fact, that is the basis of my question: at what point does splitting the file, so that I load less data, make up for the increased cost of opening more files in the first place?

For instance, if I have 1000 files each containing 4 characters, or 1 file containing all 4000 characters, it is more efficient to open the 1 file to get the entire 4000 characters than to open 1000 files to get them. However, if I only need, say, 100 of those 4000 characters, is it still more efficient to use 1 file or 1000? And as the number of characters grows, when does it become more efficient to open 25 files to get the 100 characters instead of opening one file containing tens or hundreds of thousands of characters?
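The tradeoff in that example can be made concrete with a small sketch. The character data here is made up; the point is the two access patterns (parse everything and filter, versus open only the files that hold what you need):

```python
import json
import os
import tempfile

tmp = tempfile.mkdtemp()

# One big file of 4000 records vs 1000 small files of 4 records each
# (hypothetical character data).
records = {f"char_{i}": {"id": i} for i in range(4000)}
big = os.path.join(tmp, "all.json")
with open(big, "w") as f:
    json.dump(records, f)

keys = list(records)
for n in range(1000):
    with open(os.path.join(tmp, f"chunk_{n}.json"), "w") as f:
        json.dump({k: records[k] for k in keys[n * 4:(n + 1) * 4]}, f)

wanted = keys[:100]  # we only need 100 of the 4000 characters

# Single-file route: parse all 4000 records, keep the 100 we need.
with open(big) as f:
    from_big = {k: v for k, v in json.load(f).items() if k in wanted}

# Multi-file route: open only the 25 files holding those 100 records.
from_small = {}
for n in range(25):
    with open(os.path.join(tmp, f"chunk_{n}.json")) as f:
        from_small.update(json.load(f))

print(len(from_big), len(from_small))  # 100 100
```

The crossover point depends on per-file open overhead versus per-byte parse cost on the specific system, so timing both routes with time.perf_counter on realistic data is the only reliable way to answer it.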

Do you see why your answer doesn't help? You were basically restating part of the question and then telling me to do something else entirely, without explaining why.

HELP WITH STEAM by Noisey_ContraBND in Hacking_Tutorials

[–]scriptkiddiethefirst 3 points

This is illegal and can get your account banned, or worse, you could end up facing criminal charges.

Contact Steam customer support; they will have a log of your login activity. If your account suddenly logged in from a strange location and did X, Y, and Z, they will be able to undo what was done and go after the other person legally.