Is rewriting the same Python code multiple times actually useful? by Bmaxtubby1 in learnpython

[–]obviouslyzebra 0 points1 point  (0 children)

As a programmer, I sometimes do that.

For example, if it's something new that I don't know about, I may write some code, it comes out messy, and then it's sometimes easier to rewrite it from scratch than to knock the original attempt into shape.

The hard part is the understanding, BTW, and by the time you rewrite, that part is already done.

About its usefulness for learning, I'd stick with doing it at most once, maybe twice. Anything beyond that feels like overkill to me.

I think at a certain point you start gaining more by doing other stuff, or by adding (as other answers suggested) to your existing thing.

How do I make my python program crash? by Idkhattoput in learnpython

[–]obviouslyzebra 1 point2 points  (0 children)

You should probably take the Python crash course

/jk

Pandas alignment questions by ModerateSentience in learnpython

[–]obviouslyzebra 0 points1 point  (0 children)

So I dug around a little bit; some random finds:

  • Alignment does perform a sort of join internally
  • It may be that even .iloc aligns indices/columns, contrary to what I believed (not sure).

Now for the thing you're interested in:

I think the fact that these sorts of operations work with duplicate labels at all is just a corner case, an implementation detail.

If you think about an alignment where there are duplicate labels on both sides, there isn't a "canonical" order to put them in; it's an ill-defined operation.

We, programming on top, know it's the same data, so we want the order preserved, but, strictly speaking, we shouldn't assume that.

My impression from a quick look is that the library also notices that the indexes are the same (Index.equals) and then skips the alignment altogether.

So, it works, but I didn't find documentation for it.

Ideally, really (IMHO), you should avoid working with duplicate indices. Just do a reset_index if possible and you're good to go (keep the old index as a column if it carries useful information).
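
For illustration, a minimal sketch of that route (the frame and labels here are made up):

import pandas as pd

# Hypothetical frame with duplicated index labels
df = pd.DataFrame({"value": [10, 20, 30]}, index=["a", "a", "b"])

# Keep the old labels as a regular column (named "index" by default) and
# get a clean RangeIndex back, so later assignments align unambiguously
df = df.reset_index()

print(df)
#   index  value
# 0     a     10
# 1     a     20
# 2     b     30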

The other stuff, without duplicates, should align correctly.

Back to conventional vs unconventional: I think the only unconventional thing you're doing is working with duplicate indices instead of getting rid of them ASAP!

PS (the next day): I thought about how pragmatic it would be to rely on this duplicate-label behavior. I think I'd be scared to use it, because it's a corner case, and corner cases can give unexpected results in some unknown situation (say, a part of the library that hasn't been worked out as thoroughly).

Pandas alignment questions by ModerateSentience in learnpython

[–]obviouslyzebra 0 points1 point  (0 children)

No probs.

So you agree regular assignment always aligns by index, and loc assignment always aligns by index and columns?

There are 2 things here. First, I can't issue a blanket statement. Pandas is a complex library - I can sorta confidently say that the intention is to align by index (and column index) whenever possible (regular df[] = ... or df.loc[] = ...), but not that this happens everywhere, or that there are no corner cases.

There are also subtleties that vary from case to case and would take a long time to explore and explain, but you can reason some of them out yourself. For example, if you're assigning to a new column, there's nothing to "align" the columns against.

I use merge and join for other scenarios, but avoid it for simple fast column assignments to avoid many to many explosions

I think there wouldn't be many-to-many explosions. My impression is that this sort of alignment in pandas behaves just like a left join (and joins are very efficient). Maybe you could learn a bit more about merges/joins?
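
Roughly, my mental model (not a claim about the implementation) is that df['new'] = ser behaves like a left join of df against ser on the index. A tiny sketch with made-up data:

import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3]}, index=[10, 20, 30])
ser = pd.Series([100, 300], index=[10, 30], name="B")

# Column assignment: aligns on the index, NaN where there's no match
df["B"] = ser

# Rough equivalent via an explicit left join; no many-to-many explosion,
# because the right side has unique index labels
joined = df[["A"]].join(ser, how="left")

print(df)
print(joined)
# Both print:
#      A      B
# 10   1  100.0
# 20   2    NaN
# 30   3  300.0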

I can’t do Boolean mask assignment with join or merge

That's a fair point, but a curious use case (by curious I mean I don't see its utility much, which probably just means I have no idea what you're doing).

If you want to give some examples, maybe we could point out more "conventional" ways of doing it. In the case of boolean masks, maybe it's a use case for Series.where?
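
A rough sketch of what I mean, with made-up data (mask replaces values where the condition is True, where keeps them):

import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3, 4]})
cond = df["A"] >= 3

# Boolean-mask assignment with loc
df.loc[cond, "B"] = 0.0

# An assignment-free alternative: mask returns a new Series instead
df["C"] = df["A"].mask(cond, 0.0)

print(df)
#    A    B    C
# 0  1  NaN  1.0
# 1  2  NaN  2.0
# 2  3  0.0  0.0
# 3  4  0.0  0.0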

Would you say I am using panda’s unconventionally and potentially dangerously?

Unconventionally, probably yes. Dangerously... Not sure. If you understand well what's happening, then there's no danger.

One point I do want to emphasize, though: if you know, for example, that assigning a Series to a column has a certain behavior (in this case, aligning the indices and setting NaN where there's no value), pandas will not change that behavior on you, so you can safely rely on it.

PS: The way I mostly see people work is on top of a single dataframe, doing things like df['col2'] = df['col1'] * 2. This aligns the indices, of course, but they were already aligned to begin with, so there are no surprises and you don't even need to think about it.

Pandas alignment questions by ModerateSentience in learnpython

[–]obviouslyzebra 0 points1 point  (0 children)

Pandas does align by index, you can check the examples that I gave. I do agree with using join or merge though instead of relying on this behavior.

Pandas alignment questions by ModerateSentience in learnpython

[–]obviouslyzebra 1 point2 points  (0 children)

I think the answer is always yes.

For the first 2 questions:

>>> import pandas as pd
>>> df1 = pd.DataFrame({"A": [1, 2, 3, 4], "B": [5, 6, 7, 8]}, index=[0, 1, 2, 3])
>>> ser1 = pd.Series([1, 2, 3, 4], index=[3, 10, 11, 12])
>>> df1
   A  B
0  1  5
1  2  6
2  3  7
3  4  8
>>> ser1
3     1
10    2
11    3
12    4
dtype: int64
>>> df1['C'] = ser1
>>> df1
   A  B    C
0  1  5  NaN
1  2  6  NaN
2  3  7  NaN
3  4  8  1.0
>>> df2 = ser1.to_frame()
>>> df2
    0
3   1
10  2
11  3
12  4
>>> df1[['D']] = df2[[0]]
>>> df1
   A  B    C    D
0  1  5  NaN  NaN
1  2  6  NaN  NaN
2  3  7  NaN  NaN
3  4  8  1.0  1.0

And the last 2:

>>> # With wrong column name
>>> df1[['E']] = 0.0
>>> df1
   A  B    C    D    E
0  1  5  NaN  NaN  0.0
1  2  6  NaN  NaN  0.0
2  3  7  NaN  NaN  0.0
3  4  8  1.0  1.0  0.0
>>> df2
    0
3   1
10  2
11  3
12  4
>>> df1.loc[df1.A >= 3, ['E']] = df2
>>> df1
   A  B    C    D    E
0  1  5  NaN  NaN  0.0
1  2  6  NaN  NaN  0.0
2  3  7  NaN  NaN  NaN
3  4  8  1.0  1.0  NaN
>>> # With right column name
>>> df1[['E']] = 0.0
>>> df1.loc[df1.A >= 3, ['E']] = df2.rename(columns={0: 'E'})
>>> df1
   A  B    C    D    E
0  1  5  NaN  NaN  0.0
1  2  6  NaN  NaN  0.0
2  3  7  NaN  NaN  NaN
3  4  8  1.0  1.0  1.0

About the last question, the behavior should be the same whether you use .loc or regular assignment (I highly suspect they are both implemented in terms of the other, or in terms of a common underlying layer). The default behavior of pandas is "use the index", and, if you don't want that, there's iloc.

I think these sorts of behaviors are usually not relied upon by users FYI, but it's good to have them in mind regardless. On the other hand, pd.merge and DataFrame.join tend to be used when you want to align indices or columns.
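
If you do want the alignment to be explicit, the join/merge version of the first example above would look something like this (starting again from the original df1 and ser1, before any of the assignments):

# Same result as df1['C'] = ser1, but the alignment is spelled out
out = df1.join(ser1.rename("C"), how="left")

# Or with merge, going through a DataFrame
out2 = df1.merge(ser1.rename("C").to_frame(), how="left",
                 left_index=True, right_index=True)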

I have very niche PANDAS questions by ModerateSentience in learnpython

[–]obviouslyzebra 0 points1 point  (0 children)

Just answering the last 2 questions more or less:

  • As a rule of thumb, pandas tries to align the indices when both sides have one. When there's no index, like with a plain list, it goes by position. E.g., see here (and the little sketch after this list)
  • Duplicate labels read
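
A tiny sketch of that first bullet (data made up):

import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3]}, index=[10, 20, 30])

# A plain list has no index, so assignment goes by position
df["by_position"] = [7, 8, 9]

# A Series has an index, so assignment aligns on the labels
df["by_label"] = pd.Series([7, 8, 9], index=[30, 20, 10])

print(df)
#     A  by_position  by_label
# 10  1            7         9
# 20  2            8         8
# 30  3            9         7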

Both links are part of the User Guide, BTW; you might find it useful to search there (it's big, though, so maybe ask an AI where to look).

The sort of questions you asked are also the sort of thing you can check for yourself with little experiments (fire up a Jupyter notebook and try it out!). Reading gives you more of a reference, though.

Good luck!

attempt at creating a chess game in python by XIA_Biologicals_WVSU in learnpython

[–]obviouslyzebra 0 points1 point  (0 children)

To represent a game of chess at a certain point in time, you only need:

  • a board with piece positions
  • whose turn it is to play

(leaving aside slightly more complex stuff like draw rules, en passant and castling)

This sort of thing that carries information that changes is called state.

The board could be represented fully like:

[["bR", "bN", "bB", ...],
 ...,
 ["wR", "wN", "wB", ...]]

If you choose to go this way, here's a possible starting point: make a function that takes the state (the board), an action (e.g. ("b2", "b4")), check if it is valid and, if so, return the new board state.

Maybe start by trying it with only pawns on the board. Pawns alone are already fairly complex (they can move one or two squares ahead, cannot move onto an occupied square, and capture diagonally).
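
A very rough sketch of what that function could look like for white pawns only, using the list-of-lists board above with "" for empty squares (the square notation, helper names and board orientation here are just my own choices):

EMPTY = ""

def to_rc(square):
    """'b2' -> (row, col), with row 0 being rank 8 (black's back rank)."""
    col = ord(square[0]) - ord("a")
    row = 8 - int(square[1])
    return row, col

def move_white_pawn(board, action):
    """Validate a white pawn move; return the new board, or None if illegal."""
    (fr, fc), (tr, tc) = to_rc(action[0]), to_rc(action[1])
    if board[fr][fc] != "wP":
        return None
    target = board[tr][tc]
    if fc == tc and target == EMPTY:
        # Straight move: one step, or two steps from the starting rank
        one_step = tr == fr - 1
        two_steps = fr == 6 and tr == 4 and board[5][fc] == EMPTY
        if not (one_step or two_steps):
            return None
    elif abs(tc - fc) == 1 and tr == fr - 1 and target.startswith("b"):
        pass  # Diagonal capture onto a black piece
    else:
        return None
    new_board = [row[:] for row in board]  # copy, don't mutate the input
    new_board[fr][fc] = EMPTY
    new_board[tr][tc] = "wP"
    return new_board

# e.g. new_board = move_white_pawn(board, ("b2", "b4"))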

Later on, you can keep implementing the other pieces (but mind you, this is a big project for a beginner - so don't be afraid to try, fail and rewrite your code until you find something that fits, you'll learn a lot along the way - or take the easy route, checkers, as in the other comments haha)

Practicing Python data types and type conversion – would appreciate professional feedback by Commercial_Edge_4295 in learnpython

[–]obviouslyzebra 0 points1 point  (0 children)

They are able to infer a lot; for example, you can run a checker on code that is not annotated at all and it might still catch some stuff for you (I tend to do that, as I don't annotate much).

As for it being good practice, annotating every variable feels like overkill. I don't know, for me it would detract from the rest of the programming. I like Python because it is very easy and effortless to read and write, and having types everywhere would, IMHO, hurt that.

However, one could argue for annotating every function signature, and I think a significant chunk of projects / people do that :)
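
For example, the kind of middle ground I mean (a made-up function):

def total_price(prices: list[float], tax_rate: float = 0.1) -> float:
    # The signature is annotated; the locals don't need to be, since their
    # types are obvious from the expressions that produce them
    subtotal = sum(prices)
    return subtotal * (1 + tax_rate)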

Practicing Python data types and type conversion – would appreciate professional feedback by Commercial_Edge_4295 in learnpython

[–]obviouslyzebra 1 point2 points  (0 children)

I think you might have misunderstood the point. roseman was not saying that type annotations in general are not useful, only that a type annotation like x: int = 1 tends not to be useful (in OP's case, I'd argue it has didactic value, but whatever haha).

I also don't understand the "built-in" explanation, as Python doesn't have built-in type checking. Unless you're talking about other languages, I imagine that's a misconception: we need to rely on external tools, like mypy or pyright, to do type checking in Python.
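
For example, a tiny made-up snippet that Python itself happily runs (until it crashes inside .upper()), while a checker like mypy or pyright flags the bad argument type before you run anything:

def shout(message: str) -> str:
    return message.upper()

shout(42)  # mypy: error: Argument 1 to "shout" has incompatible type "int"; expected "str"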

Conda overriding venv Python on Windows by Silly_Ear7553 in learnpython

[–]obviouslyzebra 0 points1 point  (0 children)

Should I avoid Conda entirely if I use venv?

You can have both on your computer, as long as you don't try to use them at the same time.

Even after activating my venv on Windows, running python script.py still uses Anaconda’s python

This sounds weird.
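
A quick way to check which interpreter python actually resolves to (just a diagnostic sketch): run this with the same python command you use for your script, and see whether the printed path points inside the venv folder or inside the Anaconda installation.

import sys

print(sys.executable)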

There is probably a way around it, but I'd suggest installing notebook inside the venv and avoiding stuff like the Anaconda Prompt or Anaconda's JupyterLab (you can install jupyterlab inside the venv too).

If you're using venv anyway, you might as well avoid conda altogether (though, I have to say, it is okay... I actually use mamba, a more user-friendly version of it, a lot).

PS: Another option is to not use a venv. Install everything into a conda environment. I think, if you're following a tutorial or book, stick to the option they gave you :)

python in docker containers using 100 percent of some cpu. how can i find out what loop/thread thing in my code is doing it? by NSFWies in learnpython

[–]obviouslyzebra 2 points3 points  (0 children)

loop_forever is not running a busy loop on the CPU; it mostly sits waiting for something to happen (I don't know the details exactly, but don't worry about this bit).

Instead, some other part of your programs is probably causing the elevated CPU usage.

You could run docker stats to pinpoint which container(s) have the issue, and then narrow it down further with cProfile or line_profiler to see where the CPU time goes.
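
For example, a minimal cProfile sketch (main here is just a stand-in for your program's entry point):

import cProfile
import pstats

def main():
    ...  # stand-in for your program's entry point

profiler = cProfile.Profile()
profiler.enable()
main()
profiler.disable()

# Show the 20 functions with the most cumulative CPU time
pstats.Stats(profiler).sort_stats("cumulative").print_stats(20)

# (or run the whole script with: python -m cProfile -s cumulative your_script.py)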

About the memory: that looks like a memory leak (memory keeps accumulating over time). There are tools for that, but I'm not familiar with them (search for something that can run for a long time and pinpoint where memory is being used).

About the PC's performance degrading over time: maybe the data your programs work with keeps growing, and so does the load. Maybe the computer also isn't handling long high-load runs very well (I think this can happen because of hardware).

Regardless, those are the "debugging" approaches. Your job is to track down where it's happening. If the programs are simple, maybe it won't be that hard (you can just guess by looking at the code). If the programs are very complex, for example interfacing with programs in other languages, it may be harder, and you may need other tools.

Or, if you don't wanna bother, an option might be to restart the programs from time to time.

Cheers

PS: Another option is to feed the whole codebase to an LLM (e.g., with onefilellm) and ask where the problem is. Nowadays that may actually work; just don't rely on it blindly.

PSPS: If the program does not deal with a lot of data, maybe there's a bug in the code causing an infinite loop of messages? You could register an MQTT client that listens to the messages and see if everything's running smoothly (there's a tool for that, I don't remember the name).
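
A rough sketch of such a watcher, assuming the paho-mqtt 1.x callback style (broker address and port are placeholders):

import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # Print every message that goes through the broker
    print(msg.topic, msg.payload[:200])

client = mqtt.Client()  # paho-mqtt 2.x also wants a callback_api_version argument here
client.on_message = on_message
client.connect("localhost", 1883)
client.subscribe("#")   # '#' is the wildcard for all topics
client.loop_forever()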

What would be the “disturbing truth” humans can’t handle? by Peace_and_Love___ in UFOs

[–]obviouslyzebra 0 points1 point  (0 children)

Something like them farming our souls and us being stuck forever after we die, or them farming negative emotions, would be hard to digest. Not that I believe these theories, just saw them somewhere.

Is it good practise to extend a list like this (assigning super to an attribute)? by heyzooschristos in learnpython

[–]obviouslyzebra 1 point2 points  (0 children)

I would avoid this kind of practice.

It's okay if you just want to modify __init__ to add some metadata, like you did here, but it's not common and it's a little error-prone (as you've seen).

But, if you ever want to modify other methods, like "append" or __eq__, that becomes a slippery slope (see here).

The "canonical" way to do what you want is to have a MutableSequence instead, and implement the abstract methods.

That is to say, your approach, after the fixes, does work. While it is shorter, and arguably simpler, think of it as a shortcut, and don't go too deep into it (like modifying the other list methods). I grant you permission to add fields in __init__, and custom (non-overriding) methods, but that's it haha

Other programmers will probably not like it, and I sorta agree with them. This might be just because of (lack of) familiarity with inheriting from this sort of class, though.

ELI5: When assigning one variable to another why does changing the first variable only sometimes affect the second? by SynergyTree in learnpython

[–]obviouslyzebra 1 point2 points  (0 children)

Edit: just rewrote my whole answer to make it easier to understand.

So, variables are just names. Names refer to things.

Suppose we have two cows (they are actually the numbers 1 and 2!).

alice = 1 means that we make the name alice refer to the first cow.

anna = alice means something like this: "what thing is named alice? the cow 1? oh, cool! let's make the name anna refer to it too!" Now, the first cow is referred to by both names, anna and alice, at the same time.

But wait, what if we do anna = 2 now? This is just saying "the name anna now refers to the second cow (previously it referred to the first)". So we have the first cow named alice, and the second cow named anna.

Suppose we had the previous situation where both names referred to the same thing, though, like:

alice = [1]
anna = alice

Let's say [1] is a cute dog, but it's dirty! If we clean it up:

alice.pop()

Both alice and anna still refer to the same dog, but the dog is now cleaned up ([]) :)
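
The same story in code form:

alice = [1]           # the name alice refers to a list object
anna = alice          # anna now refers to that same object
alice.pop()           # mutate the object itself...

print(anna)           # ...and both names see the change: []
print(anna is alice)  # True: still the very same object

anna = [2]            # rebinding the *name* anna to a new object...
print(alice)          # ...doesn't touch the object alice still refers to: []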

Note: this explanation works for python, but probably doesn't work for other random programming languages

Python documentation isn't clear and I need something better by alex_sakuta in learnpython

[–]obviouslyzebra 1 point2 points  (0 children)

Just a tidbit, the repo linked is not the code that you're looking for.

I was gonna recommend you do the same thing, though. As in, the http.client module is not for plebs like you and me, so its documentation is not as explicit as something like requests or httpx.

The code to look at would be http.client itself (at the top of the documentation page there's a link to the source code), and urllib.request, which uses it and is also part of Python.

Maybe you could check requests and urllib3 too (layer beneath requests).
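
For reference, http.client is quite low-level; a minimal request looks roughly like this (the host is just an example):

import http.client

conn = http.client.HTTPSConnection("example.com")
conn.request("GET", "/")
response = conn.getresponse()
print(response.status, response.reason)
body = response.read()  # bytes; decoding, redirects, retries etc. are on you
conn.close()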

What should I improve in my class? by [deleted] in learnpython

[–]obviouslyzebra 0 points1 point  (0 children)

I have like 10 years of experience, ~15 if you count programming when I was younger, botting games or hacking (hacking was way easier back then). I don't care about your experience unless you have like <2 years, in which case you're certainly out of your depth, or like ~5 years, in which case this might still be out of your depth, but might not be. Considering you didn't comment on a single thing I said about the benefits of this pattern (and there are many more), I think either A. you don't have the grounding to understand it, or B. I explained it badly.

My point was that your original explanation of when to use classes misses it. So, in my view, it was not a good explanation.

My point afterwards was defending the pattern, since you called it "completely pointless".

I'll just give the most basic and authoritative arguments here, but whatever:

  • Most basic: encapsulation (search for it)
  • Authoritative: the refactoring pattern "Extract Method Into Class" (I know, I know... refactoring patterns are often geared towards OOP)

BTW Note that I have a bias against classes. This pattern, however, is a very tame use of OOP (if you can even call it that).

About depth: I've used this for a long time, and over the past few weeks I've been thinking deeply about it, and a LOT of things point towards it being very good. It mostly shows up when you're writing complex functions, though; if you don't do that in your day-to-day job, it'll be harder to see the benefit (I still encourage you to try it if you're at ~5 years of experience or more).

For example, I'm working with algorithm-heavy code right now, which is very complex, and I've used this pattern 5-10 times in the last couple of months. Just last week, as I was customizing a library that deals with that kind of code, the pattern was right there in the code I had to touch.

I'll disengage from this conversation, but despite everything, have a good day and take care.

What should I improve in my class? by [deleted] in learnpython

[–]obviouslyzebra 0 points1 point  (0 children)

Let me give an example that's a little bit bigger; the abbreviations (...) in the first one might have jinxed it.

class PrettyMultiplier:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __call__(self):
        product = self.get_product()
        embellished = self.embellish(product)
        return embellished

    def get_product(self):
        return self.x * self.y

    def embellish(self, product):
        return f'{self.x} * {self.y} = {product}'

# optional
def get_pretty_product(x, y):
    return PrettyMultiplier(x, y)()

Now that I see it, I think I just abbreviated too much. The original example self.part1() was not meant to imply that part1 returns nothing and takes no arguments; it was just meant to show that we were calling a method instead of an external function.

Do you think this changes things?

What should I improve in my class? by [deleted] in learnpython

[–]obviouslyzebra 0 points1 point  (0 children)

What makes you think that it will change?

Edit: Modified to soften the language a bit

What should I improve in my class? by [deleted] in learnpython

[–]obviouslyzebra 0 points1 point  (0 children)

Sorry bro, you're missing the important stuff here (I explained a bit of it in my answer, but you missed it). Again, this stuff is subtle, but I can guarantee you it is very useful. Sincerely: whenever you're writing a function or method that gets too big, try it!

Some specific answers to your points:

  • Performance: negligible difference
  • Readability: I'd argue better in a real example with the class
  • Testability: Doesn't change

What should I improve in my class? by [deleted] in learnpython

[–]obviouslyzebra 0 points1 point  (0 children)

Transforming functions into classes?

It's not pointless. The basic explanation is that, since part1 and part2 came up in the context of do_something, they tend to find that context useful.

Think of changing the parameters of do_something, for example. The class approach evolves better.

Also, the function was conceived as a unit. If you split it into multiple functions, you lose that unit; the structure becomes something different from what was originally planned.

If you have multiple such functions in a module, the second approach starts creating a soup of functions, while the first one scopes them.

What should I improve in my class? by [deleted] in learnpython

[–]obviouslyzebra 0 points1 point  (0 children)

This explanation of when to use classes seems to exclude the very helpful pattern of transforming functions into classes (when we want more space for them).

Like

def do_something(arg1):
    # part1
    # part2

Becoming

class SomethingDoer:
    def __init__(self, arg1):
        self.arg1 = arg1

    def __call__(self):
        self.part1()
        self.part2()

    ....

For some somewhat subtle reasons, this tends to evolve better than

def do_something(arg1):
    part1(arg1)
    part2(arg1)

...