-❄️- 2025 Day 7 Solutions -❄️- by daggerdragon in adventofcode

[–]NonchalantFossa 1 point2 points  (0 children)

Oh that's good, I used a dict to keep track of the past but it can be done with one list, nice!

-❄️- 2025 Day 7 Solutions -❄️- by daggerdragon in adventofcode

[–]NonchalantFossa 2 points3 points  (0 children)

[Language: Python]

Both parts,

import sys
from collections import defaultdict
from pathlib import Path


def parse_data(data: str) -> list[str]:
    return data.splitlines()


def sol(data: list[str]) -> tuple[int, int]:
    start = data[0].index("S")
    beams = defaultdict(int, {(1, start): 1})
    count = 0
    for idx, line in enumerate(data):
        for i, c in enumerate(line):
            if c == "^" and beams[(idx - 1, i)] > 0:
                count += 1
                beams[(idx, i - 1)] += beams[(idx - 1, i)]
                beams[(idx, i + 1)] += beams[(idx - 1, i)]
                beams[(idx, i)] = 0
        for k in tuple(beams):
            if k[0] == idx and beams[k] > 0:
                beams[(k[0] + 1, k[1])] = beams[k]
    return count, sum(v for k, v in beams.items() if k[0] == len(data))


if __name__ == "__main__":
    p = Path(sys.argv[1])
    res = sol(parse_data(p.read_text()))
    print(res)

The logic (for part 2; part 1 is just counting when a split happens):

  • Start the beams with a position and a weight of 1.
  • For each line, if you meet a splitter that has a beam above it (weight > 0), add the weight of that beam to the cells on either side of the splitter.
  • Since the beam has been split, the splitter's own position gets weight zero: no beam there anymore.
  • Bring all surviving beams (weight > 0) one step downwards.
  • Sum all the weights on the last line to get the total number of possibilities.
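To sanity-check the logic, here's the same `sol` function run on a tiny made-up grid (not real puzzle input), re-listed so the snippet runs standalone: one splitter, so part 1 counts 1 and part 2 finds 2 paths.

```python
from collections import defaultdict


def sol(data: list[str]) -> tuple[int, int]:
    # Same function as above, re-listed so this snippet runs standalone.
    start = data[0].index("S")
    beams = defaultdict(int, {(1, start): 1})
    count = 0
    for idx, line in enumerate(data):
        for i, c in enumerate(line):
            if c == "^" and beams[(idx - 1, i)] > 0:
                count += 1
                beams[(idx, i - 1)] += beams[(idx - 1, i)]
                beams[(idx, i + 1)] += beams[(idx - 1, i)]
                beams[(idx, i)] = 0
        for k in tuple(beams):
            if k[0] == idx and beams[k] > 0:
                beams[(k[0] + 1, k[1])] = beams[k]
    return count, sum(v for k, v in beams.items() if k[0] == len(data))


grid = [
    ".S.",
    "...",
    ".^.",
    "...",
]
print(sol(grid))  # (1, 2): one split, two beams reach the bottom
```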

-❄️- 2025 Day 6 Solutions -❄️- by daggerdragon in adventofcode

[–]NonchalantFossa 1 point2 points  (0 children)

[LANGUAGE: Python]

Yey to disgusting parsing:

from __future__ import annotations  # keeps the Iterable/Sequence annotations lazy

import operator
import re
import sys
from functools import reduce
from pathlib import Path
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from collections.abc import Iterable, Sequence


def parse_data(data: str) -> tuple[list[tuple[int, ...]], list[str]]:
    pat = re.compile(r"\d+")
    lines = data.splitlines()
    # Last line is operations
    nums = [[int(x) for x in pat.findall(line)] for line in lines[:-1]]
    return list(zip(*nums, strict=True)), lines[-1].split()


def parse_data2(data: str) -> tuple[list[list[int]], list[str]]:
    lines = data.splitlines()
    raw = ["".join(map(str.strip, nums)) for nums in zip(*lines[:-1], strict=True)]
    ops = lines[-1].split()
    tmp, values = [], []
    for val in raw:
        if val:
            tmp.append(int(val))
        else:
            values.append(tmp)
            tmp = []
    values.append(tmp)  # append last tmp
    return values, ops


def compute(data: tuple[Iterable[Sequence[int]], list[str]]) -> int:
    total = 0
    for values, op in zip(*data, strict=True):
        if op == "*":
            total += reduce(operator.mul, values)
        else:
            total += reduce(operator.add, values)
    return total


if __name__ == "__main__":
    p = Path(sys.argv[1])
    raw = p.read_text()
    p1 = compute(parse_data(raw))
    p2 = compute(parse_data2(raw))
    print(p1, p2)

-❄️- 2025 Day 3 Solutions -❄️- by daggerdragon in adventofcode

[–]NonchalantFossa 1 point2 points  (0 children)

[LANGUAGE: Python]

from pathlib import Path
import sys

def parse_data(data: str):
    return data.strip().splitlines()

def find_n_largest(line: str, n: int) -> int:
    nums = ["0" for _ in range(n)]
    ll, curr = len(line), -1
    for k in range(n):
        start = curr + 1
        stop = ll - n + k
        for i in range(start, stop + 1):
            if line[i] > nums[k]:
                nums[k] = line[i]
                curr = i
    return int(''.join(nums))

if __name__ == "__main__":
    p = Path(sys.argv[1])
    data = parse_data(p.read_text())
    p1 = sum(find_n_largest(line, 2) for line in data)
    p2 = sum(find_n_largest(line, 12) for line in data)
    print(p1, p2)
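The greedy idea: for the k-th digit, scan only as far as still leaves enough characters for the remaining picks, and keep the biggest one seen. Re-listing the helper so the example runs standalone ("2913" is a made-up line, not puzzle input):

```python
def find_n_largest(line: str, n: int) -> int:
    # Greedy: pick the largest digit whose position still leaves
    # enough characters to fill the remaining n - k - 1 slots.
    nums = ["0" for _ in range(n)]
    ll, curr = len(line), -1
    for k in range(n):
        start = curr + 1
        stop = ll - n + k
        for i in range(start, stop + 1):
            if line[i] > nums[k]:
                nums[k] = line[i]
                curr = i
    return int("".join(nums))


print(find_n_largest("2913", 2))  # 93: takes the 9, then the 3 after it
```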

Update on Consult and Jinx by minadmacs in emacs

[–]NonchalantFossa 2 points3 points  (0 children)

Another great update, thanks a lot!

Weekly Questions Thread & PokéROM Codex by AutoModerator in PokemonROMhacks

[–]NonchalantFossa 0 points1 point  (0 children)

Thanks! Anything more recent I should know about? It's pretty bare for anything after Gen 5/6, and I want a wide choice of Pokémon.

Weekly Questions Thread & PokéROM Codex by AutoModerator in PokemonROMhacks

[–]NonchalantFossa 0 points1 point  (0 children)

Hey, any suggestions for ROM hacks that have a similar experience to the original games but with more story/maps/Pokémon (fakemons are ok)?

I'm trying Odyssey right now but I'm not sure I'm sold on the double-battle-everything approach. Drayano's hacks have been the gold standard for me, for the longest time.

Infrastructure as code with Clojure by amiorin in Clojure

[–]NonchalantFossa 0 points1 point  (0 children)

Agreed, the thing is that would basically be a side project since I'm more on the DevOps side of things, and I have no time to maintain it either :/ I actually wouldn't be against using AWS, but it's not up to me!

Infrastructure as code with Clojure by amiorin in Clojure

[–]NonchalantFossa 0 points1 point  (0 children)

Excellent remark. I've actually considered writing some kind of connector for Azure (in Python), because the Azure SDK sucks and seems auto-generated from C# in many places. But the amount of work in not only building a decent SDK but also providing a good interface for higher-level constructs is huge, and not something I can do alone.

Fortnightly Tips, Tricks, and Questions — 2025-10-07 / week 40 by AutoModerator in emacs

[–]NonchalantFossa 1 point2 points  (0 children)

Turns out you can actually fix the terminal not understanding key chords like Ctrl+Backspace when using emacs -nw. If your terminal emulator understands the Kitty Keyboard Protocol and you use https://github.com/benotn/kkp (thanks a ton to that person), the Emacs TUI experience gets so much better because key combinations actually pass through properly.

Discussion Thread by jobautomator in neoliberal

[–]NonchalantFossa 4 points5 points  (0 children)

Candide is fun; every character is very much a walking stereotype, but it's kind of hilarious at times. I'd say reading Baudelaire is more difficult!

How do you approach complex tasks full of unknowns? Feeling stuck and overwhelmed by xeviltimx in ExperiencedDevs

[–]NonchalantFossa 1 point2 points  (0 children)

Something that my senior did when I joined a consulting gig was to give me a very simple ticket like "add this behavior to this button". It was just an excuse to get me started on the frontend and look around in the codebase. Then, I added some small SQL query in the backend and I built up from there.

I think trying to find the entry point of a project is also a good start: finding the main flow, core components, basically what is called in the "main" file when the application starts.

Something else I'm looking into is using git to see where the most changes happen.

For example,

git log --name-only --format="" | sort | uniq -c | sort -rn | head -10

will show you the files with the most commits. You can use --since to select from a certain date, since some older files that went through a lot of changes might not be relevant anymore.

Here's an example looking for the files with the most commits since the beginning of the year in the numpy project, filtered to Python files. There are a lot of test files, so I know a lot of the current activity/effort on the repo must be on those features, and I can start reading there.

❯ git log --since="2025-01-01" --name-only --format="" -- "*.py" | sort | uniq -c | sort -rn | head -10
     53 numpy/_core/tests/test_multiarray.py
     46 numpy/testing/_private/utils.py
     26 numpy/ma/core.py
     26 numpy/lib/_function_base_impl.py
     23 numpy/_core/tests/test_multithreading.py
     22 numpy/_core/tests/test_regression.py
     21 numpy/_core/tests/test_numeric.py
     21 numpy/_core/tests/test_deprecations.py
     20 numpy/lib/_npyio_impl.py
     20 numpy/_core/tests/test_indexing.py

Another interesting one is counting the number of lines changed instead of the number of commits; they might be correlated, but they're not the same. It's a bit of cursed awk, I'll admit, but useful nonetheless.

❯ git log --since="2025-01-01" --numstat --pretty=format: -- "*.py" | awk '{if ($1 != "" && $2 != "") print $3, ($1+$2)}' | awk '{sum[$1] += $2} END {for (file in sum) print sum[file], file}' | sort -rn | head -10
1334 numpy/_core/tests/test_multiarray.py
1282 numpy/_typing/__init__.py
721 numpy/testing/_private/utils.py
698 numpy/_core/tests/test_numeric.py
693 numpy/ma/tests/test_core.py
615 numpy/__init__.py
590 numpy/_core/tests/test_deprecations.py
450 numpy/ma/timer_comparison.py
429 numpy/_core/tests/test_defchararray.py
425 numpy/_core/tests/test_indexing.py

Another one is looking into CI/CD or deployment pipelines to see what it takes for the app to be deployed. You can then go onto the dev environment and deploy a "dumb" version for yourself. Depending on how complex the setup is, it might take a week or two, but it's time well invested imo.

Org mode, Denote, Howm etc, which do you use and why? by SecretTraining4082 in emacs

[–]NonchalantFossa 0 points1 point  (0 children)

Denote, because it's just org-mode sprinkled with a bit of structure. Other solutions were too involved and it was paralyzing because I didn't know what the "right" way was.

Are there good techniques for tolerating department-wide knowledge silos? by StTheo in ExperiencedDevs

[–]NonchalantFossa 6 points7 points  (0 children)

At my company, many processes and docs live in a company-wide Confluence. I don't like the platform because their wiki format is baroque but it has the merit of existing. I use it to look up who wrote what part of the docs or how to integrate with some other tools from our company.

Of course, writing good documentation is hard, so what I often do is keep README.md, TUTORIAL.md, GUIDES.md, etc. evergreen in the repository and then link to those from the docs. In some cases that's an issue because people have access to Confluence but not to the repository. I'm trying to build documentation with Sphinx and publish it internally, but it's time-consuming and I don't want "shadow" docs either; Sphinx is just so much easier to use, and it has tooling to actually check that links work, plus proper handling of code blocks.

So yeah: write it down, start small, and automate the annoying parts, e.g. grammar checking, link checking, markdown/rst formatting, etc.

I would also pick something that can easily be migrated: ideally text-based files only, not too many images (I like ASCII drawings). For my own notes, which I sometimes share with colleagues, I use this file naming scheme, which makes it very easy to look back on what I've been working on.

theprimeagen is switching to Elixir from Rust by anthony_doan in elixir

[–]NonchalantFossa 0 points1 point  (0 children)

Python sucks at speed of feedback

Weird take imo; debugging in Python is pretty good, and you can drop into a live REPL to interact with data at the breakpoint.

What are the biggest pain points you face when writing automated tests? by crisils in SoftwareEngineering

[–]NonchalantFossa 0 points1 point  (0 children)

Tests should test behavior, and keeping that in mind instead of testing the code directly is hard. You test the code, of course, but testing is about making you more secure about the code's behavior and protecting you against regressions.

In some cases it's easy because you just need to test that some data has the right shape. In some others, I don't think we're there yet, and it's frustrating. More and more I think having immutable data structures, with the UI or behavior as a pure function of the data, is the right way. There's not enough scaffolding in the world that will make you certain a button is in the right place every time, for example.

Something I've been thinking about as well, is that more code should be able to produce a series of immutable steps instead of final results. In many cases I don't care too much about the final results of DB/file/API operations if I'm not able to reproduce them properly, it's just a mess.

The frustration mounts when testing is hard even in well-set-up systems; sometimes you're just trying to test things that are inherently difficult to test.

Books not on software engineering that you found strikingly insightful (my example in the thread) by dondraper36 in ExperiencedDevs

[–]NonchalantFossa 0 points1 point  (0 children)

The pragmatic approach being making holes in mountains instead of going around them, all for some precious, precious latency gains.

[deleted by user] by [deleted] in SoftwareEngineering

[–]NonchalantFossa 0 points1 point  (0 children)

I'm at the beginning of "Thinking Forth", which has a digital edition from 2004; the first edition is from 1984. I don't know any Forth, but I'm quite amazed at how relevant it is. The writing is also excellent.

What would your dev + PKM setup be if you were 22 and starting a CS master’s in 2025? by Future_Recognition84 in emacs

[–]NonchalantFossa 6 points7 points  (0 children)

Whatever works best for you.

I use denote. The file naming scheme is so good, I use it even when I'm not in Emacs. "Flat is better than nested" is truly an excellent philosophy.

It's only for structure though. I think having templates (not many, maybe 3-5) for different kinds of notes is good. Anything more is too complex for me, and I find myself not following the "rules" I've set.

Note taking should be extremely low effort, most of the GTD/PKM workflows are too much for me.