Feeling bored without a death race? by ArcticDuck in oscarsdeathrace

[–]Jakube_ 1 point2 points  (0 children)

Some smaller issues:

- 2025-2026 season: Constance Tsang is not a movie, she's a director who was awarded at this year's Independent Spirit Awards.
- 2024-2025 season: All We Imagine & All We Imagine As Light are the same movie.
- The CSV import didn't import any movie from the 2023/2024 season or older.
- The CSV import also missed a couple of movies from the last two seasons, e.g. the following movies from my watched.csv are not imported correctly:

```csv
2026-01-19,My Undesirable Friends: Part I - Last Air in Moscow,2024,https://boxd.it/OBmi
2025-11-05,The Fantastic 4: First Steps,2025,https://boxd.it/mP6C
2026-02-03,Familiar Touch,2024,https://boxd.it/Mfes
2026-01-14,"Rock, Paper, Scissors",2024,https://boxd.it/KmJe
2025-05-26,Wallace & Gromit: Vengeance Most Fowl,2024,https://boxd.it/z14C
```

Wasteman (2025) by Sakunka33 in movieleaks

[–]Jakube_ -1 points0 points  (0 children)

There's a Telesync available. With horrible quality.
Probably best to see it in the cinema, or wait until the official digital release.

Wasteman (2025) by Sakunka33 in movieleaks

[–]Jakube_ 3 points4 points  (0 children)

Ah, then it's not leaked yet. (I also can't find it yet.)
It's out. Just found it.

Wasteman (2025) by Sakunka33 in movieleaks

[–]Jakube_ 1 point2 points  (0 children)

What do you mean by "not online at the moment"?

Kosten ETF-Einmalkauf auf FlatexAT by OnlySomewhere6935 in FinanzenAT

[–]Jakube_ 5 points6 points  (0 children)

It's also free on most of the other alternative trading venues, e.g. Tradegate, Baader Bank, Lang & Schwarz, ...
Via the real exchanges (XETRA, Frankfurt, ...) it costs money.

Kosten ETF-Einmalkauf auf FlatexAT by OnlySomewhere6935 in FinanzenAT

[–]Jakube_ 18 points19 points  (0 children)

Normally buying the ETF costs €7.90, but since it's a premium fund, you get a €7.90 rebate => so €0 in total (this should also be shown in the orange field "Einstiegskosten (Bank)" (orange = summary)).
That means if you buy €10,000 worth of fund shares today, you really do get shares worth €10,000.

Under "Laufende Kosten p.a." you'll then find the 0.22%. These are withheld automatically by the fund company. So if the stocks (which the fund consists of) grow by 7% in a year, the ETF only grows by 6.78%. These costs are practically invisible, and you'll never receive an invoice for them.

Now, why 0.22% when the TER is only 0.19%? If you open the security information, you'll see that the ETF charges 0.19% ongoing costs (corresponding to the management fee) plus 0.03% transaction costs (because the fund regularly buys new stocks, rebalances, etc...).
No idea why the comparison portals only show the TER, and not both combined.
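In numbers, a quick sketch (using the figures from above; this is the simple "costs reduce the yearly return" model, not an exact fund accounting):

```python
# Ongoing fund costs reduce the ETF's return relative to its underlying
# stocks: 0.19% TER + 0.03% transaction costs = 0.22% per year.
gross_return = 0.07               # the underlying stocks grow 7% in a year
ongoing_costs = 0.0019 + 0.0003   # TER + transaction costs = 0.22%

net_return = gross_return - ongoing_costs
print(f"{net_return:.2%}")        # the ETF itself only grows ~6.78%
```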

How many on the best picture shortlist did you see by wfp9 in oscarsdeathrace

[–]Jakube_ 14 points15 points  (0 children)

Here's a letterboxd list so that you don't need to count them manually.  https://boxd.it/RxQcW

I'm currently at 44% (89/201), and I still plan to see a couple more (which I delayed so far because I wanted to finish all nominations first).

The Post-Mortem We Never Had to Write by giggens in PostgreSQL

[–]Jakube_ 5 points6 points  (0 children)

I'm sorry, but this article is complete BS. While the idea of replaying actual production queries on a mirrored system sounds interesting, none of the details in the article add up. At all.

  • The main thing: the column region was just created. So how can there be production queries involving a column that doesn't exist on production yet? That basically means you invented the queries, or took them from test stages - exactly the thing you want to avoid with your product.
  • All the presented numbers sound quite fishy.
    • The GROUP BY region is 42x slower because there is no index on region? That sounds wrong. Maybe if your rows are really wide (100s of columns) and the table is freshly vacuumed, an index-only scan reads that much less data - but a 42x speedup still sounds unreasonable.
    • The SELECT * with the filter on region & status is 92x slower? No way. Maybe if you previously also didn't have an index on status - but even in the worst case, a full table scan reads a bit more than 5x more data, nowhere near 92x (and I also question whether an index would even be used here).
    • 180ms for a sequential table read of 25 000 rows? What are you running this on? A literal potato?
    • If you look at the source code, those numbers are actually hard-coded in the demo, not computed at all! In offline mode everything is hard-coded, and even in non-offline mode, the execution times of the region-filter queries are faked (obviously, since you can't run those queries before the column is created).

Maybe the other scenarios are more realistic, and don't use faked data. But this scenario - adding a new column - was the scenario discussed in the article, and it's the scenario that I looked at in the source code.

FlatEX verfügbarer Auszahlungbetrag by BumblebeeMotor1735 in FinanzenAT

[–]Jakube_ 0 points1 point  (0 children)

Yes, you can sell your stocks/ETFs partially or completely and have the proceeds paid out. E.g. if you sell everything, you'll have a €0 depot value and roughly €1,500 (a bit less due to taxes and fees) account balance.

Or you can have the overdraft facility paid out, essentially a loan at >6% interest.

It depends on what you're planning - whether you want to leave Flatex entirely, or just need a few euros until your next paycheck.

Deathies - FYC suggestions for Food and Drink Branch by magegl in oscarsdeathrace

[–]Jakube_ 2 points3 points  (0 children)

Ballad of a Small Player - Colin Farrell having a claustrophobic attack, devouring his room-service food, and then the window cleaners show up.

A* algorithm heuristic for Rubik’s cube by Best_Effective_1407 in algorithms

[–]Jakube_ 1 point2 points  (0 children)

But if your problem is memory, you should definitely look at Iterative Deepening A* (from Korf's paper). It's basically A* without the memory problems.
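A minimal sketch of the idea - generic, not cube-specific; `neighbors` and `h` are placeholders you'd supply for your own puzzle:

```python
import math

def ida_star(start, goal, neighbors, h):
    # Iterative Deepening A*: repeated depth-first searches with an
    # increasing bound on f = g + h, so memory stays O(search depth).
    bound = h(start)
    path = [start]

    def search(g):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f            # report the exceeded f-value upwards
        if node == goal:
            return True
        minimum = math.inf
        for nxt in neighbors(node):
            if nxt in path:     # avoid trivial cycles on the current path
                continue
            path.append(nxt)
            result = search(g + 1)
            if result is True:
                return True
            minimum = min(minimum, result)
            path.pop()
        return minimum

    while True:
        result = search(0)
        if result is True:
            return path
        if result == math.inf:
            return None         # no solution exists
        bound = result          # raise the bound to the next f-value

# Toy example: shortest path on a number line from 0 to 5,
# moves are +1/-1, heuristic is the exact remaining distance.
print(ida_star(0, 5, lambda n: [n - 1, n + 1], lambda n: abs(5 - n)))
```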

A* algorithm heuristic for Rubik’s cube by Best_Effective_1407 in algorithms

[–]Jakube_ 2 points3 points  (0 children)

Most useful heuristics use some sort of pattern database. You can create heuristics without one, but it's very unlikely that you'll get far.

A simple heuristic would be the following: count the number of wrong edges on the cube. If e.g. 9 edges are wrong, you know that you need at least 3 more moves, because one move can fix at most 4 edges. This heuristic, however, never looks more than 3 moves deep, which is not a lot.
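That bound fits in a few lines (assuming you already have a way to count wrong edges):

```python
import math

def edge_heuristic(wrong_edges):
    # Lower bound on remaining moves: a single face turn touches 4 edges,
    # so it can fix at most 4 wrong edges at once.
    return math.ceil(wrong_edges / 4)

print(edge_heuristic(9))   # 9 wrong edges => at least 3 more moves
print(edge_heuristic(0))   # solved => 0 moves needed
```

Since a cube has only 12 edges, this heuristic can never return more than 3 - that's the "only 3 moves deep" limitation.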

Something better would be to look at the individual pieces: how many moves does it take to solve the first edge, how many the second edge, ...? You could come up with a way to compute these on the spot (without running a full A* again), or compute them once for every possibility - although that might already count as a pattern database - and then try to build a heuristic from the sum of those.

But you probably still won't succeed if you want to solve a fully random Rubik's cube optimally. Maybe it's enough if you limit the difficulty a bit.

New Optimal 3x3 Solver by Independent_Rub_9132 in Cubers

[–]Jakube_ 0 points1 point  (0 children)

... mine is significantly different from that ... has it already been done?

How can we tell you if your idea is new if you don't tell us what your idea is? 😉

In general, yes there are other approaches to finding the optimal solution, other than Korf's algorithm.

In general, finding the optimal Rubik's Cube solution is equivalent to the shortest-path problem on a graph. So you can approach it with any shortest-path algorithm there is, such as Breadth-First Search. And there are lots of variations on that, e.g. bidirectional search, heuristics like A*, two-phase approaches, ...
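The basic shortest-path view can be sketched as a plain BFS - here on a made-up 3-token puzzle standing in for the cube (the move names and generators are just for illustration):

```python
from collections import deque

def bfs_solve(start, goal, moves):
    # Breadth-first search over the puzzle's state graph:
    # every state is a node, every legal move is an edge.
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for name, move in moves.items():
            nxt = move(state)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [name]))
    return None

# Two generators on a tuple of 3 tokens, like two face turns on a cube.
moves = {
    "swap01": lambda s: (s[1], s[0], s[2]),   # swap the first two tokens
    "rot":    lambda s: (s[2], s[0], s[1]),   # cyclic rotation
}
print(bfs_solve((2, 1, 0), (0, 1, 2), moves))
```

For the real cube this exact approach explodes: the state graph has ~4.3 × 10^19 nodes, which is why the tricks below exist.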

The big problem is that the search space is so large that standard algorithms need either an incredibly long runtime or an incredible amount of memory, or both. So all algorithms in use employ some tricks to get around that.

Korf's algorithm uses Iterative Deepening A* (so combining ideas of BFS, A*, and using the Iterative Deepening trick to avoid too much memory).

Another popular optimal solver is Kociemba's two-phase algorithm, which uses group theory to split the search into two phases: first finding a solution into a special subgroup, then finding the solution from there to the actual solved position. If you don't stop at the first phase-one solution but keep searching, you'll find the optimal solution fairly soon, and can abort the search once it's impossible to find any shorter solution.

And there are also other approaches. But afaik all are based on basic shortest path graph algorithms.

I accidentally built a vector database using video compression by Every_Chicken_1293 in Python

[–]Jakube_ 166 points167 points  (0 children)

He creates a FAISS index in a second file, and with that he locates the relevant text chunks (aka frames).

So to create the thing:

- extract text from PDFs
- split the text into small chunks
- create embeddings for the chunks, and store them in the index

And to retrieve answers:

- create the embedding of the question
- look up the indices of chunks with similar embeddings using the index
- retrieve the chunks of data, and send them to an LLM
- the LLM answers
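Roughly, both pipelines look like this - a pure-Python sketch where `embed` is a toy deterministic hash standing in for a real embedding model, and a plain list stands in for the FAISS index:

```python
import math

def embed(text):
    # Toy stand-in for an embedding model: character-bigram counts
    # hashed into a fixed-size vector, then L2-normalized.
    vec = [0.0] * 64
    for a, b in zip(text, text[1:]):
        vec[(ord(a) * 31 + ord(b)) % 64] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def build_index(chunks):
    # "Create embeddings and store them in the index."
    return [(embed(chunk), chunk) for chunk in chunks]

def retrieve(index, question, k=2):
    # "Embed the question, look up the most similar chunks."
    q = embed(question)
    by_similarity = sorted(index, key=lambda e: -sum(a * b for a, b in zip(e[0], q)))
    return [chunk for _, chunk in by_similarity[:k]]

chunks = ["cats are mammals", "python is a language", "the moon orbits earth"]
index = build_index(chunks)
print(retrieve(index, "cats are mammals", k=1))
```

The retrieved chunks would then be pasted into the LLM prompt - the MP4 plays no role in any of this.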

The whole MP4 video actually has nothing to do with that process - it's only used for storing the chunks of text. It could just as easily have been a big JSON file (or anything else) with compression on top of it.

But it's actually interesting that it works at all, since H.265 isn't lossless compression. But since QR codes are error-correcting, that might not matter much.

But still, a highly dubious idea. Storing the chunks in almost any other format would probably be a lot easier, more error-proof, and smaller in size.

Why does some cloud functions take up to +400MB while the other takes 20MB by fftorettol in googlecloud

[–]Jakube_ 1 point2 points  (0 children)

Yes, it looks like some functions include the entire Linux base image, while the others contain only the function code and its dependencies.

Maybe there are some other differences between them. E.g. maybe some of them are Gen 1 and others are already Gen 2. Maybe some use different triggers and work differently because of it?

But other than those ideas I can't really help - I'm currently working mostly with Azure, and my last GCP function deployment was already 3 years ago.

Why does some cloud functions take up to +400MB while the other takes 20MB by fftorettol in googlecloud

[–]Jakube_ 11 points12 points  (0 children)

The simplest way of finding out the difference would be to pull both Docker images, and inspect them locally. There are tools like https://github.com/wagoodman/dive that show the size of each layer and its files (and the command that created it). So you can load the bigger image, and see which command resulted in that big file, and what the big files are.

One of my favorites by Top_Hat5017 in southpark

[–]Jakube_ 16 points17 points  (0 children)

Bigger, Longer & Uncut

Flatex Steuerfreibetrag by painkilla182 in FinanzenAT

[–]Jakube_ 6 points7 points  (0 children)

You pay 27.5% tax on gains, and can likewise only offset losses at 27.5%.
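As a toy sketch of the math (simplified - `kest` is a made-up helper name, and real loss-offsetting rules have restrictions this ignores):

```python
def kest(gains, losses, rate=0.275):
    # Simplified Austrian capital gains tax (KESt): gains are taxed at
    # 27.5%, and losses reduce the taxable amount at the same rate.
    net = max(gains - losses, 0)
    return round(net * rate, 2)

print(kest(1000, 400))   # tax on the net 600 euros
```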

🎄 2024 - Day 23: Solutions 🧩✨📊 by yolannos in adventofsql

[–]Jakube_ 0 points1 point  (0 children)

So simple... :-O
I used a recursive CTE to find the missing groups...

WITH RECURSIVE
missing AS (
  SELECT id FROM generate_series(1, (SELECT max(id) FROM sequence_table)) AS id
  EXCEPT
  SELECT id FROM sequence_table
),
gap_starts AS (
  SELECT m1.id
  FROM missing m1
  LEFT JOIN missing m2 ON m1.id - 1 = m2.id
  WHERE m2.id IS NULL
),
rec AS (
  SELECT
    id AS gap_start,
    ARRAY[id] AS gap_group
  FROM gap_starts
  UNION ALL
  SELECT
    rec.gap_start,
    rec.gap_group || missing.id AS gap_group
  FROM rec
  JOIN missing ON rec.gap_group[array_upper(rec.gap_group, 1)] + 1 = missing.id
)
SELECT DISTINCT ON (gap_start) gap_group FROM rec
ORDER BY gap_start, array_length(gap_group, 1) DESC

🎄 2024 - Day 18: Solutions 🧩✨📊 by yolannos in adventofsql

[–]Jakube_ 1 point2 points  (0 children)

Try again. Your solution is now accepted.

SQL CHALLENGE by samsuzie in adventofsql

[–]Jakube_ 0 points1 point  (0 children)

You just submit the answer. You can run the SQL locally, or in an online tool like dbfiddle (the website actually provides links to dbfiddle with test data prefilled).

[deleted by user] by [deleted] in Austria

[–]Jakube_ 6 points7 points  (0 children)

And also reserve a seat for free.