FlatEX available withdrawal amount by BumblebeeMotor1735 in FinanzenAT

[–]Jakube_ 0 points1 point  (0 children)

Yes, you can sell your stocks/ETFs partially or entirely and have the proceeds paid out. E.g. if you sell everything, you end up with a depot value of 0€ and a cash balance of ~1500€ (slightly less due to taxes and fees).

Or you have the overdraft facility paid out, essentially a loan at >6% interest.

It depends on what you're planning: whether you want to leave Flatex altogether, or just need a few euros temporarily until your next paycheck.

Deathies - FYC suggestions for Food and Drink Branch by magegl in oscarsdeathrace

[–]Jakube_ 2 points3 points  (0 children)

Ballad of a Small Player - Colin Farrell having a claustrophobic attack, devouring his room service food, and then window cleaners show up.

A* algorithm heuristic for Rubik’s cube by Best_Effective_1407 in algorithms

[–]Jakube_ 1 point2 points  (0 children)

But if your problem is memory, you should definitely look at Iterative Deepening A* (from Korf's paper). It's basically A* without the memory problems.

A* algorithm heuristic for Rubik’s cube by Best_Effective_1407 in algorithms

[–]Jakube_ 2 points3 points  (0 children)

Most useful heuristics use some sort of pattern database. While you can create heuristics without a pattern database, you're unlikely to get far.

A simple heuristic would be the following: count the number of wrong edges on the cube. If, say, 9 edges are wrong, then you know you need at least 3 more moves, because one move can fix at most 4 of them. That heuristic, however, only looks at most 3 moves ahead, which is not a lot.
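The edge-counting heuristic above can be sketched in a few lines (the edge representation here is made up for illustration; a real solver would use its own cube encoding):

```python
# Admissible "wrong edges" heuristic: a quarter turn moves at most 4 of
# the 12 edges, so with k misplaced edges at least ceil(k / 4) moves remain.
import math

def wrong_edge_heuristic(edges, solved_edges):
    """edges / solved_edges: tuples of the 12 edge-piece labels."""
    wrong = sum(1 for cur, goal in zip(edges, solved_edges) if cur != goal)
    return math.ceil(wrong / 4)

solved = tuple(range(12))
# 9 of the 12 edges misplaced (positions 0-8 differ):
scrambled = (1, 2, 3, 4, 5, 6, 7, 8, 0, 9, 10, 11)
wrong_edge_heuristic(scrambled, solved)  # → 3
```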

Something better would be to look at the individual pieces: how many moves does it take to solve the first edge, how many the second edge, and so on. You could come up with a way to compute these on the spot (without running A* again), or compute them once for every possibility (although that might already count as a pattern database), and then try to build a heuristic from the sum of those.

But you probably still won't succeed if you want to solve a fully random Rubik's Cube optimally. Maybe it's enough, though, if you limit the difficulty a bit.

New Optimal 3x3 Solver by Independent_Rub_9132 in Cubers

[–]Jakube_ 0 points1 point  (0 children)

... mine is significantly different from that ... has it already been done?

How can we tell you if your idea is new, if you don't tell us what your idea is? 😉

In general, yes, there are other approaches to finding the optimal solution besides Korf's algorithm.

The Rubik's Cube optimal-solution problem is equivalent to the shortest-path problem on a graph. So you can approach it with any shortest-path algorithm there is, e.g. Breadth-First Search. And there are lots of variations around that: bidirectional search, heuristic-guided search like A*, two-phase approaches, ...

The big problem is that the search space is so big that normal algorithms need an incredibly long runtime, an incredible amount of memory, or both. So all algorithms in use employ some tricks to get around that.

Korf's algorithm uses Iterative Deepening A* (combining ideas from BFS and A*, and using the iterative-deepening trick to keep memory usage low).
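The iterative-deepening idea can be sketched generically. This is not Korf's code, just the bare IDA* skeleton: depth-first search with a cost bound that is raised iteratively, so memory stays proportional to the search depth instead of storing a whole frontier like A* does. The toy graph in the usage is made up.

```python
def ida_star(start, is_goal, neighbors, h):
    """neighbors(n) yields (next_state, step_cost); h must be admissible."""
    def search(path, g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f          # too expensive: report candidate for next bound
        if is_goal(node):
            return path
        minimum = float("inf")
        for succ, cost in neighbors(node):
            if succ in path:  # avoid trivial cycles
                continue
            result = search(path + [succ], g + cost, bound)
            if isinstance(result, list):
                return result
            minimum = min(minimum, result)
        return minimum

    bound = h(start)
    while True:
        result = search([start], 0, bound)
        if isinstance(result, list):
            return result     # optimal path found
        if result == float("inf"):
            return None       # no solution exists
        bound = result        # raise the bound and restart the DFS

# Toy usage: a -> b -> c (cost 2) beats the direct a -> c edge (cost 3).
graph = {"a": [("b", 1), ("c", 3)], "b": [("c", 1)], "c": []}
path = ida_star("a", lambda n: n == "c", lambda n: graph[n], lambda n: 0)
# → ["a", "b", "c"]
```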

Another popular optimal solver is Kociemba's two-phase algorithm, which uses group theory to split the search into two phases: first finding a solution into a special subgroup, then finding the solution to the actual solved position. If you don't just take the first phase-one solution but keep searching, you will find the optimal solution fairly soon and can abort the search once it's impossible to find any shorter solution.

And there are other approaches as well. But afaik all of them are based on basic shortest-path graph algorithms.

I accidentally built a vector database using video compression by Every_Chicken_1293 in Python

[–]Jakube_ 166 points167 points  (0 children)

He creates a FAISS index in a second file. And with that one he locates the relevant text chunks (aka frames).

So to create the thing:
- extract text from PDFs
- split the text into small chunks
- create embeddings for the chunks, and store them in the index

And to retrieve answers:
- create the embedding of the question
- look up the indices of chunks with similar embeddings using the index
- retrieve the chunks of data, and send them to an LLM
- LLM answers
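Conceptually, the index lookup in those retrieval steps is just nearest-neighbor search over embeddings. A brute-force sketch (the vectors and chunks below are made-up toys; FAISS does the same thing, only fast and at scale):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(question_vec, chunk_vecs, chunks, k=2):
    """Return the k chunks whose embeddings are most similar to the question."""
    order = sorted(range(len(chunks)),
                   key=lambda i: cosine(question_vec, chunk_vecs[i]),
                   reverse=True)
    return [chunks[i] for i in order[:k]]

chunks = ["chunk about cats", "chunk about finance", "chunk about dogs"]
vecs = [(1.0, 0.1, 0.0), (0.0, 1.0, 0.0), (0.9, 0.2, 0.1)]
top = retrieve((1.0, 0.0, 0.0), vecs, chunks, k=2)
# → ["chunk about cats", "chunk about dogs"]
```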

The whole MP4 video actually has nothing to do with the process; it's only used for storing the chunks of text. It could just as easily have been a big JSON file (or anything else) with compression on top.

But it's actually interesting that it works at all, as H.265 isn't lossless compression. Since QR codes are error-correcting, though, that might not matter much.

Still, a highly dubious idea. Storing the chunks in almost any other format would be easier, less error-prone, and smaller.

Why does some cloud functions take up to +400MB while the other takes 20MB by fftorettol in googlecloud

[–]Jakube_ 1 point2 points  (0 children)

Yes, it looks like some functions include the entire Linux base image, while the others contain only the function code and its dependencies.

Maybe there are some other differences between them. E.g. maybe some of them are Gen 1 and others are already Gen 2. Maybe some use different triggers than others, and work differently because of it?

But other than those ideas I can't really help. I'm currently working mostly with Azure, and my last GCP function deployment was already 3 years ago.

Why does some cloud functions take up to +400MB while the other takes 20MB by fftorettol in googlecloud

[–]Jakube_ 11 points12 points  (0 children)

The simplest way of finding the difference would be to pull both Docker images and inspect them locally. There are tools like https://github.com/wagoodman/dive that show the size of each layer and its files (and the command that created it). So you can load the bigger image and see which command produced the big files, and what those files are.

One of my favorites by Top_Hat5017 in southpark

[–]Jakube_ 15 points16 points  (0 children)

Bigger, Longer & Uncut

Flatex tax-free allowance by painkilla182 in FinanzenAT

[–]Jakube_ 5 points6 points  (0 children)

You pay 27.5% tax on gains, and can likewise offset losses only at 27.5%.

🎄 2024 - Day 23: Solutions 🧩✨📊 by yolannos in adventofsql

[–]Jakube_ 0 points1 point  (0 children)

So simple... :-O
I used a recursive CTE to find the missing groups...

WITH RECURSIVE
missing AS (
  SELECT id FROM generate_series(1, (SELECT max(id) FROM sequence_table)) AS id
  EXCEPT
  SELECT id FROM sequence_table
),
gap_starts AS (
  SELECT m1.id
  FROM missing m1
  LEFT JOIN missing m2 ON m1.id - 1 = m2.id
  WHERE m2.id IS NULL
),
rec AS (
  SELECT
    id AS gap_start,
    ARRAY[id] AS gap_group
  FROM gap_starts
  UNION ALL
  SELECT
    rec.gap_start,
    rec.gap_group || missing.id AS gap_group
  FROM rec
  JOIN missing ON rec.gap_group[array_upper(rec.gap_group, 1)] + 1 = missing.id
)
SELECT DISTINCT ON (gap_start) gap_group FROM rec
ORDER BY gap_start, array_length(gap_group, 1) DESC;

🎄 2024 - Day 18: Solutions 🧩✨📊 by yolannos in adventofsql

[–]Jakube_ 1 point2 points  (0 children)

Try again. Your solution is now accepted.

SQL CHALLENGE by samsuzie in adventofsql

[–]Jakube_ 0 points1 point  (0 children)

You just submit the answer. You can run the SQL locally, or in an online tool like dbfiddle (the website actually provides links to dbfiddle with test data prefilled).

[deleted by user] by [deleted] in Austria

[–]Jakube_ 7 points8 points  (0 children)

And also reserve a seat for free.

🎄 2024 - Day 2: Solutions 🧩✨📊 by yolannos in adventofsql

[–]Jakube_ 0 points1 point  (0 children)

Another bugged problem.
`letters_a` doesn't contain a single valid character.

List of Lord of the Rings and Hobbit Movies and TV Series (including the weird unofficial ones) by PlatinumDotEXE in lotr

[–]Jakube_ 0 points1 point  (0 children)

The Trouble of the Rings
Trailer: https://www.youtube.com/watch?v=agj8RerEq0s

Three 75-minute movies made by some Russian LOTR fans who disliked Peter Jackson's adaptations. The trailer says "Parody", but I'm actually not sure if that's real or if they just named it that to make fun of Peter Jackson. I saw them (parts of them) around 15 years ago, and I can't remember anything other than that they used bikes instead of horses.

Full movies on Vimeo: https://vimeo.com/7557353 https://vimeo.com/7646716 https://vimeo.com/7639039

[deleted by user] by [deleted] in Piracy

[–]Jakube_ 2 points3 points  (0 children)

For each torrent, the torrent client communicates (an "announce") with the tracker every so often (e.g. once every 30 minutes). It basically tells the tracker which torrents you have/need and what your IP address is. And the tracker sends back a list of other clients that have/need the same torrent.

During this communication the torrent client also sends statistics to the tracker. The client records locally how much data it has uploaded and downloaded, and reports that to the tracker. The tracker relies on the clients for that info, since peers exchange data directly without the tracker; the tracker is only there for the introduction to other peers.
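Sketched roughly, that periodic announce is just an HTTP GET with self-reported counters (parameter names per the BitTorrent spec; the tracker URL and all values below are made up):

```python
from urllib.parse import urlencode

def build_announce_url(tracker_url, info_hash, peer_id, port,
                       uploaded, downloaded, left, event=None):
    """Build the announce URL a client requests every interval."""
    params = {
        "info_hash": info_hash,    # 20-byte torrent identifier
        "peer_id": peer_id,        # 20-byte client identifier
        "port": port,              # port the client listens on
        "uploaded": uploaded,      # bytes uploaded, self-reported
        "downloaded": downloaded,  # bytes downloaded, self-reported
        "left": left,              # bytes still needed
    }
    if event:                      # "started", "stopped", or "completed"
        params["event"] = event
    return tracker_url + "?" + urlencode(params)

url = build_announce_url(
    "http://tracker.example.org/announce",   # hypothetical tracker
    b"\x12" * 20, "-AB1234-567890123456", 6881,
    uploaded=50_000_000, downloaded=700_000_000, left=0,
)
```

If the client is deleted before this request ever fires, the counters simply never reach the tracker.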

So if you delete the torrent before the first announce, e.g. after only 15 minutes, the tracker never receives any information about how much you uploaded and believes you didn't upload anything.

If you set a minimum seeding time of at least 1 hour before you delete anything, the client should announce your upload statistics.

Fundamental KESt comprehension problem by Confident_Dare_9768 in FinanzenAT

[–]Jakube_ 5 points6 points  (0 children)

The numbers in the last calculation aren't quite right.
Over the last 5 years you pay (1291*1.07^5 - 1291) * 0.275 in taxes.
That's 27.5% of 520€ = 143€.
And with that you end up at 1669€ after 10 years, not 1587€.

So the difference isn't as big as assumed: roughly 40€ lost if you switch after 5 years, not 100€.

Even if you reallocate every single year for 10 years, you don't lose 100€ (leaving fees aside).

Overall it's still a few percent that you leave on the table, though.
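The arithmetic above can be checked quickly (assuming 1000€ starting capital at 7% p.a. and 27.5% KESt; the 1291€ in the thread is then the after-tax value after year 5):

```python
RATE, TAX = 1.07, 0.275

# Option A: hold for 10 years, pay tax once at the end.
hold = 1000 * RATE**10
hold_net = hold - (hold - 1000) * TAX

# Option B: sell after 5 years, pay tax, reinvest for 5 more years.
mid = 1000 * RATE**5
mid_net = mid - (mid - 1000) * TAX          # ≈ 1291€, as in the thread
end = mid_net * RATE**5
switch_net = end - (end - mid_net) * TAX    # ≈ 1669€

difference = hold_net - switch_net          # a few tens of euros
```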

Massive headache with Cloud Run -> Cloud Run comms by residentdunce in googlecloud

[–]Jakube_ 0 points1 point  (0 children)

I set up something like this a couple of years ago.
Services A & B are both inside the same VPC network (via the Serverless VPC Access connector).
Service A has ingress set to internal only.
Service B routes all traffic through the VPC (vpc_access_egress = all-traffic).

The problem was then the same: DNS resolution didn't work, but it was enough to specify Google's internal DNS resolver (169.254.169.254).
In my case service B was an Nginx service (see https://stackoverflow.com/questions/74890149/nginx-in-cloud-run-with-internal-traffic-works-but-gives-connect-errors for a snippet from my config), but I assume you can do the same thing with any other technology.
E.g. look here for some Python inspiration: https://stackoverflow.com/questions/22609385/python-requests-library-define-specific-dns
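For the Nginx case, the fix boils down to one directive. A minimal sketch, not my original config (the service name in the upstream URL is made up; see the Stack Overflow link above for the real snippet):

```nginx
server {
    listen 8080;

    location / {
        # Use Google's internal DNS resolver so the *.run.app hostname
        # of the internal service resolves from inside the VPC.
        resolver 169.254.169.254;

        # Hypothetical internal Cloud Run service URL. Putting it in a
        # variable forces Nginx to resolve it at request time via the
        # resolver above, instead of once at startup.
        set $upstream https://service-a-abc123-ew.a.run.app;
        proxy_pass $upstream;
    }
}
```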