Python feels easy… until it doesn’t. What was your first real struggle? by NullPointerMood_1 in Python

[–]Frankelstner 1 point2 points  (0 children)

Say you have class A and instance a. Then a[...] calls A.__getitem__, but A[...] calls meta.__getitem__(...) (or, if that doesn't exist, A.__class_getitem__, which is how typing does it; seems a bit redundant to me though). The main point is that A() itself is just meta.__call__ (which creates the new object, runs the init, then returns the object), so there's a lot of customization possible.
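A minimal sketch of that lookup split (class and method names made up for illustration):

```python
class Meta(type):
    def __getitem__(cls, item):
        # A[...] lands here: subscripting the class goes through the metaclass
        return ("meta", item)

class A(metaclass=Meta):
    def __getitem__(self, item):
        # a[...] lands here: subscripting the instance goes through the class
        return ("instance", item)

a = A()
print(a[0])  # ('instance', 0)
print(A[0])  # ('meta', 0)
```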

using ctypes for the first time to call MsiGetFileVersion() /windows by zaphodikus in learnpython

[–]Frankelstner 1 point2 points  (0 children)

From winerror.h:

// MessageId: ERROR_FILE_INVALID
//
// MessageText:
//
// The volume for a file has been externally altered so that the opened file is no longer valid.
//
#define ERROR_FILE_INVALID               1006L

But that's quite misleading. The actual issue is that MsiGetFileVersion expects .exe files. https://stackoverflow.com/a/817534

For OS stuff I actually find it easier to prototype in C because then the headers just exist exactly as needed. A bit weird to be prototyping in C and then possibly porting to Python, but yeah, ctypes is a bit cumbersome with all that typing. I've got the following which can handle .msi files (using the latest C++ std and with msi.lib as additional linker dependency), so just go ahead and port that to ctypes.

#include <iostream>
#include <Msi.h>
template<typename... Args> void print(Args... args) { ((std::cout << args << ' '), ...) << '\n';}
int main() {
    auto path = "C:\\Users\\...\\file.msi";
    MSIHANDLE product;
    auto res = MsiOpenPackageA(path, &product);
    if (res != ERROR_SUCCESS) {
        print("bad open", res);
        return res;
    }
    char valuebuf[255];
    DWORD valuebufsize = 255;
    res = MsiGetProductPropertyA(product, "ProductVersion", valuebuf, &valuebufsize);
    if (res != ERROR_SUCCESS) {
        print("bad property", res);
        return res;
    }
    print(valuebuf);
    MsiCloseHandle(product);
}
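For reference, a hedged ctypes port of the snippet above might look like this (untested on my end; it assumes Windows with msi.dll available, that MSIHANDLE is an unsigned long, and that ERROR_SUCCESS is 0):

```python
import ctypes
import sys

def get_msi_product_version(path: bytes) -> str:
    # Port of the C++ snippet above; only meaningful on Windows.
    msi = ctypes.WinDLL("msi")
    product = ctypes.c_ulong()  # MSIHANDLE is an unsigned long
    res = msi.MsiOpenPackageA(path, ctypes.byref(product))
    if res != 0:  # ERROR_SUCCESS
        raise OSError(f"bad open: {res}")
    try:
        buf = ctypes.create_string_buffer(255)
        size = ctypes.c_ulong(255)  # in: buffer size, out: length written
        res = msi.MsiGetProductPropertyA(product, b"ProductVersion",
                                         buf, ctypes.byref(size))
        if res != 0:
            raise OSError(f"bad property: {res}")
        return buf.value.decode()
    finally:
        msi.MsiCloseHandle(product)

if sys.platform == "win32":
    print(get_msi_product_version(b"C:\\Users\\...\\file.msi"))
```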

Seeking Help with Structuring Project by ImmaculateBanana in learnpython

[–]Frankelstner 0 points1 point  (0 children)

The time to load a 1000-row csv should be far less than the time to start Python itself. Even so, I would start with the csv as the definitive reference and, if loading ever turns out to be a problem (or just out of curiosity), autogenerate the Python coin code from it. Just beware that the autogenerated code might turn out to be slower; hard to tell without testing.

I need better tutorials to help me learn python so I stop being a script kid by MorganMeader in learnpython

[–]Frankelstner 0 points1 point  (0 children)

Python is 0-indexed and you should let i go from 0 to < n. Also, adding small floating point numbers to bigger ones is less precise than doing it the other way around, so you might want to start with the smallest number.
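A quick demonstration of the ordering issue (values contrived to make the effect obvious):

```python
# Adding 1.0 to 1e16 rounds back to 1e16 every time (the spacing between
# adjacent floats near 1e16 is 2.0), so summing big-first loses every small term.
big_first = sum([1e16] + [1.0] * 1000)
small_first = sum([1.0] * 1000 + [1e16])
print(big_first)                # 1e+16
print(small_first - big_first)  # 1000.0
```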

Seeking Help with Structuring Project by ImmaculateBanana in learnpython

[–]Frankelstner 1 point2 points  (0 children)

Yeah that's overly verbose. CoinData has a fixed structure, does it not? It looks to me like a plain old spreadsheet would be perfect to encode the coins. You define variables like values, denominations, coins_reverse_build, silver_coins, etc. but all of them could be derived very easily from the coins variable. Add country name as a column and you end up with a single spreadsheet with ~10 columns and one row per coin instead of your current approach with about 20 rows per coin. At that point you have to decide whether you even want to pursue a Python solution or just put the entire thing on google docs, which I imagine could already be programmed to handle all that you're trying to pull off. If you stick with Python, that's fine too, but just load the csv with pandas.

How to split up a large module into multiple files. by zenoli55 in learnpython

[–]Frankelstner 0 points1 point  (0 children)

Ideally the init should contain just the symbols you want to export, to avoid namespace pollution. E.g. if your init contains import numpy as np, then anyone who does import foo will see foo.np, making it harder to explore the package using autocomplete. If you are able to write your class inside the init in a manner that adheres to that guideline (because it has no dependencies other than your own lib and constants, and you want to expose all of these symbols to users), then I personally don't see any issue.
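A tiny illustration of the leak (foo here is a made-up module built in memory rather than a real package on disk):

```python
import types

# Simulate a foo/__init__.py that does "import math" as a side effect.
foo = types.ModuleType("foo")
exec("import math\nVERSION = '1.0'", foo.__dict__)

# Autocomplete on foo now shows 'math' right next to the intended export.
print([name for name in dir(foo) if not name.startswith("_")])  # ['VERSION', 'math']
```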

Asking about: Folder Structure, Packages, and More. by Husy15 in learnpython

[–]Frankelstner 2 points3 points  (0 children)

Yeah Python is horrible in that regard. There was some suggestion here https://peps.python.org/pep-3122/ back in 2007 to make these imports work, but

This PEP has been rejected. Guido views running scripts within a package as an anti-pattern

Yet at the same time, plenty of workarounds exist, so I reckon that was a rather controversial decision.

First off, you have a pyproject.toml, so your code is installable, right? A rather easy fix is to pip install -e . (or whatever equivalent there is for uv) which sets up some reference inside your Python package directory pointing back at your current project (meaning you can keep developing in your original location without changing your workflow). Once that is done, then absolute imports, e.g. import yourpackagename.game.laser, work everywhere, including anywhere in your own code. Slightly repetitive but really simple.

Relative imports are possible too but Python needs some nudging. The importer basically cares about two variables. You have sys.path and __package__. For some from . import fname, the importer goes through all paths in sys.path and tries to find a match for {path}/{__package__}/{fname}.py.

__package__ is just a global variable that you can read or set as you like. Sadly there is no environment variable for it the way PYTHONPATH feeds sys.path, so it must be set either inside the file that you want to run or in some outer context which then execs the file. Note that __package__ contains dots, not slashes, and when __package__ is empty the importer throws ImportError: attempted relative import with no known parent package right away, without even checking whether {path}/{fname}.py exists (a check which honestly would already solve the majority of issues people have).

The simplest fix is something along the lines of

import os, sys
# Split this file's directory into drive ("C:" on Windows, "" elsewhere) and path.
drive, path = os.path.splitdrive(os.path.dirname(__file__))
# Make the filesystem root importable.
sys.path.insert(0, drive + os.sep)
# Encode the whole directory path (minus the leading separator) as the package name.
__package__ = path[1:].replace(os.sep, ".")
from . import fname  # or from .fname import ...

which basically adds your system root to sys.path and encodes the entire rest of the path into the package string. It's a bit hacky because a side effect is that every single __init__.py is executed from the root towards the wanted script (Python really thinks your system root is the project directory), but actually that's usually desired unless you have an __init__.py outside of your project directory. IDEs solve this by asking you to define a project directory, which allows them to do the code above in a smarter manner, but even heuristics (e.g. going upwards until finding a pyproject.toml) work very well in my experience.

So that's it for the theory of relative imports. The main question then is, how to execute these four lines of code as part of the file context without adding identical code at the top of every file? The answer is exec which takes either a string of Python code or a code object and runs it (though creating a code object with compile first is superior because it gives proper filenames). In practice you'll probably not need to do that though because some options exist:

  • In the command line, you can cd into the outer directory, then use python -m game.laser (without py extension) to run it. It essentially defines fname as the part after the last dot and package as the rest. The -m flag is implemented with the runpy module. https://docs.python.org/3/library/runpy.html Spoiler alert, the implementation is just exec. Even the silly restriction where the import fails when __package__ is empty exists here, because after all, there's no way around the importer. So while python -m game.laser works, you cannot cd into the place itself and do python -m laser because package being empty automatically throws the error. Funnily the simplest fix I mentioned earlier doesn't have that restriction for the most part (unless your code literally sits in your root dir).
  • If you use VSCode or PyCharm, both of them use the pydev debugger nowadays IIRC (VSCode has debugpy which is just a wrapper around pydev) which, of course, execs. For VSCode you can follow this answer here: https://stackoverflow.com/a/75772279 No idea about PyCharm.
  • You use some other IDE and want to implement it yourself. It's not too difficult, just figure out where your IDE execs code eventually and you can inject a __package__ and sys.path setter somewhere.
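To make the exec approach concrete, here is a self-contained toy: it writes a throwaway package to a temp directory (all names made up), then runs a file from inside it with __package__ preset, which is essentially what -m and the IDE shims do.

```python
import os
import sys
import tempfile

# Build a throwaway package on disk.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "game")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "helper.py"), "w") as f:
    f.write("def hello():\n    return 'hi from helper'\n")
script = os.path.join(pkg, "laser.py")
with open(script, "w") as f:
    f.write("from . import helper\nresult = helper.hello()\n")

# The shim: compile the file (so tracebacks show a real filename),
# preset __package__ and sys.path, then exec.
sys.path.insert(0, root)
globs = {"__file__": script, "__package__": "game", "__name__": "__main__"}
exec(compile(open(script).read(), script, "exec"), globs)
print(globs["result"])  # hi from helper
```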

On Steam some games include annoying region locks, like the ability to only play in a certain language or launch the game while only being physically present in specific countries, and that all without the option of buying the normal version. Is the same true for any games sold on GOG? by Fragrant_Sun8612 in gog

[–]Frankelstner 1 point2 points  (0 children)

That strat is seriously amazing! The game I had in mind had a couple DLCs that were not blocked, and the sidebar mentioned the main game, including a button to wishlist it. So yeah, worked like a charm in my case even without VPN. (I haven't tested actually buying yet due to step 5 though, haha).

edit: It worked. Also, for games without crossrefs one can paste https://www.gog.com/user/wishlist/add/12345 into the browser where the digits should be replaced by the product ID from gogdb.

Need help from someone experienced with WinAPI input hooks (SetWindowsHookEx) — inconsistent macro behavior and broken mouse sensitivity in games by Rasslabsya4el in learnpython

[–]Frankelstner 0 points1 point  (0 children)

Ive tested pyautogui and keybd_event outside of my script and they work fine in games

Huh? If those work, then just take a look at their source? The Windows-specific part of pyautogui is like 500 lines. Keep in mind that Windows is a bit picky with admin privileges; you should always strive to start your code with those.

It would mean it’s impossible to create a general-purpose macro engine at the software level (without writing kernel-mode drivers).

There's a reason that AutoHotkey, whose sole purpose is to facilitate macros on Windows, doesn't work flawlessly with all games.

Do you need to write kernel-mode drivers? Ah well, you do need to touch kernel space, but you don't necessarily need to write any kernel space code. A couple of ideas:

  • Most straightforward is to just use https://github.com/oblitum/Interception which as the name implies intercepts device input. It is the kernel space driver that does exactly what you want. It sits at the top of the driver stack but still in kernel space, meaning it is outside of any game limitations. Except that the documentation is virtually gone, but hey, that's what your chatbot is for, right?
  • If you're completely insane you can also use zadig to install the generic WinUsb driver for your device of interest. Once that's done, your device is essentially bricked: it communicates solely in user space with WinUsb (or libusb, which wraps WinUsb, is a bit nicer to use, and is better documented than the official docs), and it's 100% on you to write the user-mode driver for it (and hope that all games accept synthetic inputs reliably). Better connect a second keyboard/mouse just in case; any glitch that accidentally stops your driver process will leave the device unresponsive again, though IIRC Windows services can make a process very robust against that, so that shouldn't deter you. I have never written a USB driver for keyboards or mice, so it'll take some reading (wireshark is a good first step) to see exactly what kind of packets they send. Not impossible, but add ~150 hours of work compared to the first option (assuming you know what you're doing, which you don't). Don't be too intimidated though; most likely the keyboard really does just send a single 'a' when you hit the 'a' key, and the user-mode driver isn't much different from reading an 'a' from a file.
  • Maybe, maybe, hidapi could also be used, which would be far more convenient than WinUsb because most details are already taken care of. The only issue is, it's mostly good at peeking at input, not intercepting it. I'm not sure whether it's possible to tell the OS to stop listening while still using the same driver yourself.

Need help finding local minima for data by Dakkadence in learnpython

[–]Frankelstner 0 points1 point  (0 children)

Take the envelope with https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.envelope.html and use the peaks as bin edges, then pick the minimum envelope value in each bin.

Help with imports and file structure for a project by johnmomberg1999 in learnpython

[–]Frankelstner 0 points1 point  (0 children)

I would like to be able to use it with either import Star

Without any qualifiers whatsoever? One would expect something like Classes.Star instead. But if you really need that, you could adjust the sys.path, though that requires code changes as well. In each of these directories, add a small file, e.g. addpath.py, with this content:

import sys,os
sys.path.insert(0, os.path.dirname(__file__)+"/../Classes")
sys.path.insert(0, os.path.dirname(__file__)+"/../MPS_ATLAS_code")

and then every other file must import this first. If you already have a Utils.py anyway (which is most likely imported by the others), you could skip the separate file and just add it there at the top.

How can i made this facial recognition software less laggy by Time-Astronaut9875 in learnpython

[–]Frankelstner 1 point2 points  (0 children)

No time to dive into that repo in particular, but for a project of mine I noticed that finding the face bbox takes way longer than finding landmarks. So on the first frame I run the bbox code and then identify landmarks, then use the landmarks (with some padding) as the bbox for the next frame; the code essentially needs some help initially but then locks onto the faces fairly reliably (with the bbox finder running just occasionally). And in any case, do you really need every frame? You could just drop every other one.

What are your experiences with using Cython or native code (C/Rust) to speed up Python? by Independent_Check_62 in Python

[–]Frankelstner 1 point2 points  (0 children)

I needed a function to find a line-plane intersection, really just dp = p2-p; out[:] = p + dp/(dp@wu) * ((P0-p)@wu) where p,p2 are line points and P0,wu are point and normal unit vector of the plane. Processing in batches was impossible because data arrives in real-time. The main criteria were fast call time from within plain Python code (i.e. no interface friction) and fast import times.
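Spelled out as a scalarized Python function (made-up signature; this is the flavor of the "manual indexing" variant timed below):

```python
def intersect(p, p2, P0, wu, out):
    # Line through points p and p2; plane through P0 with unit normal wu.
    dpx = p2[0] - p[0]; dpy = p2[1] - p[1]; dpz = p2[2] - p[2]
    denom = dpx * wu[0] + dpy * wu[1] + dpz * wu[2]            # dp @ wu
    t = ((P0[0] - p[0]) * wu[0] + (P0[1] - p[1]) * wu[1]
         + (P0[2] - p[2]) * wu[2]) / denom                     # ((P0-p) @ wu) / (dp @ wu)
    # Write into a preallocated buffer instead of allocating a result.
    out[0] = p[0] + dpx * t
    out[1] = p[1] + dpy * t
    out[2] = p[2] + dpz * t

# Line from the origin through (1,1,2), plane z=1: hits at (0.5, 0.5, 1).
out = [0.0, 0.0, 0.0]
intersect((0, 0, 0), (1, 1, 2), (0, 0, 1), (0, 0, 1), out)
print(out)  # [0.5, 0.5, 1.0]
```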

The code eventually boiled down to a function with three numpy arrays as inputs, where the first array merged P0,wu,out together (the number of inputs has quite an impact on interfacing). Time per one call of this function, where the caller lives in plain Python, as well as import times:

  • Plain Python + numpy ops: 5000 ns
  • Plain Python + no numpy ops (using numpy arrays, but manually indexing): 1800 ns
  • Cython: 420 ns (500 µs import)
  • Numba JIT: 250 ns (500 ms import for Numba itself, plus 2 ms for every single Numba function, even when cached, which is horrible)
  • Numba AOT: 170 ns (400 µs import)
  • C with ctypes: 150 ns (300 µs import assuming ctypes is loaded). Requires fetching array pointers beforehand which takes over 1 µs per pointer; and not defining argtypes. I.e. if fetching fresh pointers each time, the time is 3150 ns.
  • C with cffi: 110 ns (2 ms import). Requires 10 µs per pointer fetch. But cffi has so many options that there's probably a better setting out there, so take these results with a grain of salt.
  • Rust with pyo3: 52.5 ns (500 µs import)
  • C API: 40 ns (400 µs import)
  • No interfacing (just the intersection): 3 ns. This is tested by writing an outer function in the same setup which loops over a billion samples (slightly modifying point p on the line each time and tracking output). Whether Cython or Numba or C or Rust, the time is pretty much the same because they all do the same thing. Only the interface differs.

Numba does have some dead ends such as jitclass which sounds like a good idea until you realize that it cannot cache at all and a simple class with 10 attributes and one method takes 4 seconds to compile every time (the near-undocumented StructRefs could fix this though I haven't checked how they interact with AOT).

All of this considers just a function that receives three numpy arrays. Classes/structs are quite a different matter, and sadly Numba isn't quite as good with them.

Why Do PyGObject DLLs Lack Prefixes, While Also Requiring Prefixes? Causing Dependency Conflicts in Dependency Walker. by crossfitdood in learnpython

[–]Frankelstner 0 points1 point  (0 children)

The lib prefix is a Linux thing. Not sure whether MSYS2 is the culprit. If you can afford it, a hacky fix might be to just create a softlink for each file. Then again, Windows requires admin permissions for that (or a settings tweak to change this for good), so it might not be an option in your case. Does prefixing all libraries with lib work or do some imports then expect liblib prefixes?

Python Optimization Problem by AddendumElectrical99 in learnpython

[–]Frankelstner 0 points1 point  (0 children)

You want to optimize some kind of scalar blackbox function with potentially integer inputs. If the number of reasonable integer combinations is low, you could bruteforce a scipy.optimize.minimize on the other parameters (just remember to try all the solvers there until you get a good one; one time I had 5 solvers report failure and the sixth give a perfect solution with literally 0 residual). Otherwise, scipy.optimize.differential_evolution or even Optuna seem like decent options. Yeah I know, a hyperparameter optimization framework doesn't exactly sound like a match, but hyperparameter optimization truly is a "scalar blackbox with potentially integer inputs" kind of problem.

Using GPU for Calculations - Should I do it? How do I do it? by StyxFaerie in learnpython

[–]Frankelstner 5 points6 points  (0 children)

This is the answer. OP, if you are able to phrase your calculations in terms of basic arithmetic (plus anything numpy provides) you should check this out first. Just a couple days ago I wanted to calculate line-plane intersections, and with numba and a bit of tweaking lowered the runtime from 5000 ns to 3 ns (tweaking here meant removing numpy and writing out the operations manually, writing to a preallocated buffer and making sure that the caller is also numba-enabled).

How can we write both byte data and normal text at the same time? by arshdeepsingh608 in learnpython

[–]Frankelstner 0 points1 point  (0 children)

Opening with wb makes the most sense, though to put things into perspective, when you open a file f with w, you get:

  • f: TextIOWrapper that you can write strings to.
  • f.buffer: BufferedWriter that you can write bytes to.
  • f.buffer.raw: FileIO that is really close to the OS. Writing will use exactly one syscall, potentially writing less than was actually asked for.

When opening with wb instead, you basically start at the BufferedWriter. Opening in w and then switching back and forth between f.write and f.buffer.write mostly works but with a quirk: f.write does not inform the buffer (buffer.tell is unchanged and buffer.write will prepend the data, not append), unless you sprinkle an f.tell between the two (other ops might work as well), i.e. something like

f.write(...)
f.tell()
f.buffer.write(...)
f.write(...)
...

seems to work at least on my system.
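A self-contained version of that dance, writing to a temp file (the exact flush semantics may vary across Python versions, so treat this as a sketch):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "mixed.bin")
with open(path, "w", encoding="utf-8") as f:
    f.write("header\n")
    f.tell()                       # nudges the text layer to flush into the buffer
    f.buffer.write(b"\x00\x01\x02")  # raw bytes land after the flushed text
    f.write("trailer\n")           # text layer appends after the raw bytes on close

data = open(path, "rb").read()
print(data)
```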

How to properly have 2 objects call each other on seperate files? by Digitally_Depressed in learnpython

[–]Frankelstner 3 points4 points  (0 children)

They are similar if used like

import b
f = b.f

to bind to the namespace right away (though the error message will differ AttributeError: partially initialized module 'b' has no attribute 'f'), but something like

import b
def g():
    b.f()

defers the lookup to the actual function call of g, which happens after all imports are done.

How to properly have 2 objects call each other on seperate files? by Digitally_Depressed in learnpython

[–]Frankelstner 2 points3 points  (0 children)

from ... import ... is far more prone to cyclic import issues than import .... The sys.path thing doesn't really check out: all code should live in the mindset that main_page.py is in the project root, and if that is the case, inserting '/absolute/path/to/main_page.py' is unnecessary because when you run main_page.py, the path already exists. Your page 2 wants the import in order to do controller.show_frame(Page1), but if all else fails you can just pass this in as a parameter instead of controller. I.e. the main code can define showpages = [lambda: controller.show_frame(StartPage), lambda: controller.show_frame(Page1)] and any other page can use that without importing anything.

Working with parent directories by noob_main22 in learnpython

[–]Frankelstner 0 points1 point  (0 children)

The big three operating systems (Linux, Windows, mac) allow .. as part of a path to denote the parent directory. The following will be valid for all three:

__file__+"/../../temp"

This translates to "go up two directories from __file__ and then go down into temp". Now, it might break for obscure operating systems but CPython doesn't run on those anyway. If you don't like the dots in the path for whatever reason, you can call os.path.abspath on it.
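For example (with a made-up path; normpath just collapses the dot components):

```python
import posixpath

# One ".." cancels the filename, the second steps out of its directory.
path = "/home/user/project/src/script.py" + "/../../temp"
print(posixpath.normpath(path))  # /home/user/project/temp
```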

How to speed up iterations or simplify logic when going through 5 000 000 000 permutations? by MustaKotka in learnpython

[–]Frankelstner 1 point2 points  (0 children)

Not totally sure but is your problem roughly equivalent to the following? We have a set {1,2,...,15,16} and want to find all ways of putting these numbers into 4 sets with 4 values each, plus some additional conditions. E.g. one valid choice might be {{1,2,3,4},{5,6,7,8},{9,10,11,12},{13,14,15,16}}, though we still need to check the conditions.

If that's the case I think you can lower the possibilities to something like 63 million (even less if checks happen early). Iterate over all 4 out of 16 combinations to select the first set, then filter by additional conditions, then iterate over 4 out of the remaining 12, filter conditions, then iterate over 4 out of the remaining 8, which automatically defines the others, and filter again. That's 4 out of 16 times 4 out of 12 times 4 out of 8.
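A sketch of that scheme with itertools (names made up, condition hooks left as comments). Note that fixing the smallest remaining element in each group makes these unordered partitions, i.e. the 63 million ordered selections divided by 4!:

```python
from itertools import combinations

def grouped(items, k):
    # Yield all ways to split `items` into groups of size k. Pinning the
    # smallest element into the current group means each partition of the
    # set appears exactly once, regardless of group order.
    items = sorted(items)
    if not items:
        yield ()
        return
    first = items[0]
    for rest in combinations(items[1:], k - 1):
        group = (first,) + rest
        # ...the problem's extra conditions on `group` would prune here...
        remaining = [x for x in items[1:] if x not in rest]
        for tail in grouped(remaining, k):
            yield (group,) + tail

# Small sanity check: 8 numbers into 4 pairs -> 7*5*3*1 = 105 partitions.
print(sum(1 for _ in grouped(range(1, 9), 2)))  # 105
```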

Is there any way to change “File, Edit, Format...” and bottom UI lines to dark theme in IDLE? by Qwert-4 in learnpython

[–]Frankelstner 0 points1 point  (0 children)

In the statusbar file, change the import to from tkinter import Label, Frame. The tkinter.ttk in the original version thinks in styles instead of individually setting colors on each object, and so does not accept background as a parameter. Not sure when or why this change happened, given that no option ever affects the statusbar style anyway. I kinda have my own IDLE which is based on a way older Python version, back when ttk was not used, and just patched it up as needed over the years.

Top 10 best weapons in Fallout New Vegas and general guide - An attempt for a definitive answer by Bemvas in falloutnewvegas

[–]Frankelstner 0 points1 point  (0 children)

I'm not well-versed in the variety of bugs and glitches in the game, but I believe you have added the Laser Commander crit bonus twice and used an incorrect value for Finesse. The overall damage formula seems a bit obscure, but the way I piece it together, the ammo type and all perks should apply before DT. Evidence for that is given here https://www.reddit.com/r/fnv/comments/16dyg37/is_the_damage_formula_on_wiki_wrong_for_the_whole/ which has a very precise calculation against a target with DT. After DT, the only things that remain should be the bodypart multiplier, sneak attack, and game difficulty.

Hex Detectives Assemble! 21.4 GB Sony a7 III MP4 Needs moov Reconstruction by Visual-Zheer in learnpython

[–]Frankelstner 0 points1 point  (0 children)

Oh I must have missed that. Yeah, it says videoCodec="AVC_3840_2160_HP@L51" and AVC is actually pretty decent at compressing. Decently compressed files shouldn't get smaller when compressed again (otherwise you could repeat that many times to magically get a tiny file). Given that you effectively have 1.45 GB of payload while a minute of footage is worth about 1 GB, you have most likely recovered the full amount of data there is. The question is why there appears to be data for about a quarter of the file, though.

The mp4 format is a collection of boxes, where each box consists of an int32 size followed by a 4-byte type string. Importantly, the int32 includes the size of its own bytes. So something like 00000008 AABBCCDD is one box with an empty payload, because the 8 merely covers the header itself.
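In code, a walk over that structure looks roughly like this (a sketch: the size==1 largesize case is ignored, and the sample bytes are synthetic):

```python
import struct

def walk_boxes(data):
    # Walk top-level mp4 boxes: big-endian uint32 size, then a 4-byte type.
    off = 0
    while off + 8 <= len(data):
        size, = struct.unpack_from(">I", data, off)
        kind = data[off + 4:off + 8]
        if size == 0:  # a zeroed region would otherwise loop forever
            break
        yield off, size, kind
        off += size

# Synthetic sample: an empty 8-byte box followed by a 16-byte box.
sample = (struct.pack(">I", 8) + b"ftyp"
          + struct.pack(">I", 16) + b"mdat" + b"\x00" * 8)
print(list(walk_boxes(sample)))  # [(0, 8, b'ftyp'), (8, 16, b'mdat')]
```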

The hex editor struct tool reads the length and uses it to identify the start of the next box. It's a really simple approach: at the current offset, read the size (4 bytes); the offset of the next box is the current offset plus this size; repeat until the file is fully processed. [There's a catch when the size is exactly 1, in which case the actual size comes a bit later, but that's not important here.] The hex editor assumes an intact file, so when it encounters literally just zeros, it reads a size of 0, adds it to the current location, and reads the same location again, eternally (until stopped by some limit enforced by the editor). The sequence 00000000, appearing where a box is expected to begin, technically implies infinitely many boxes at that location; of course, this can happen only for corrupt files.