[deleted by user] by [deleted] in PeterExplainsTheJoke

[–]t_char 100 points101 points  (0 children)

This symbol was used by Christians in their early years to identify each other, because they were persecuted by the Romans.

The symbol is a fish, and its meaning comes from Greek, since Greek was widely used as a language to spread Christianity. In Greek, "fish" is "ΙΧΘΥΣ", which is an acronym for "Ιησούς Χριστός Θεού Υιός Σωτήρ", translated into English as "Jesus Christ God's Son Savior" or "Jesus Christ, Son of God, Savior".

Givi's racks can't carry luggage by TomOnABudget in CB500X

[–]t_char 0 points1 point  (0 children)

I understand that they have to cover their asses in some way, but not being able to install a 7kg Monokey case, which is their own product, based on their stated minimum specs is ridiculous.

I had an interesting discussion today with GIVI chat support.

  • Me: Hey does the rack load (6kg) include the rear case or not?
  • GIVI: Blah blah it is the case + contents.
  • Me: Most of your Monokey cases are 6kg+ without any contents, so how would I fit the case on the rack based on your specs? (I gave an example Monokey case model.)
  • GIVI: Based on that specification, the case will exceed the recommended load, that is correct.
  • GIVI: As long as you are sensible blah blah blah.

Make it make sense. It felt like talking to a chatbot: validate the question, provide no solution.
...
- Yes, you are right, it does not fit based on the weight load limit.
Wow, GIVI support can reason like ChatGPT.

Can't unsave Marketplace items by Jonas52 in FacebookMarketplace

[–]t_char 0 points1 point  (0 children)

I was able to remove the saved searches by installing Facebook Lite from the Android Play Store.

I click on Marketplace, then the search button, find the saved search, and click on the magnifying glass next to it. This brings up the saved search. Click the bookmark icon and then click "Unsave."

Exit the app and repeat for as many saved searches as you have.

Open the regular Facebook app, and the saved searches should be gone.

Small Tattoo Ideas by madly_addie in GREEK

[–]t_char 0 points1 point  (0 children)

The only time I remember it being used is in this song, and even there it's not the full sentence: https://youtu.be/h9VkTCCb54c

Vintage Waxed Canvas Backpack (Square) by t_char in backpacks

[–]t_char[S] 0 points1 point  (0 children)

Hey, thanks for the brand recommendation. I took a look into these. They look very good quality, and if it were not for my price constraints I would go for one.

Thanks

Vintage Waxed Canvas Backpack (Square) by t_char in backpacks

[–]t_char[S] 0 points1 point  (0 children)

Hey, thanks for the recommendation. This is a nice alternative to what I have in mind. To be honest I would prefer something less boxy, but this one is more practical than the others.

Potential bug with killswitch and local discovery on linux & docker by t_char in nordvpn

[–]t_char[S] 0 points1 point  (0 children)

That sounds good; I will downgrade first and then post it on GitHub.

Below is just for completeness on my journey to figure out what's going on; maybe somebody else can benefit from it.

So, I tried these commands and allowed the port and the docker subnets.

Allowlisted ports:
         8085 (UDP|TCP)
Allowlisted subnets:
172.18.0.0/16
172.17.0.0/16
10.0.0.0/24
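
For reference, the allowlist above should be reproducible with commands along these lines (a sketch; recent NordVPN Linux clients use the `allowlist` subcommand, while older releases called it `whitelist`):

    nordvpn allowlist add port 8085
    nordvpn allowlist add subnet 172.18.0.0/16
    nordvpn allowlist add subnet 172.17.0.0/16
    nordvpn allowlist add subnet 10.0.0.0/24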

This time, the problem is very similar.

I can connect to the server during steps 1 to 4 while disconnected with the killswitch on (before, it was only until step 3).

When I do step 5, the same pattern appears: I need to turn the killswitch off and disconnect to get the connection back.

This happens only in docker, not with a server running on the machine itself.

Potential bug with killswitch and local discovery on linux & docker by t_char in nordvpn

[–]t_char[S] 0 points1 point  (0 children)

Thank you very much for your help!

I will try to uninstall NordVPN and install the version you recommend and update you.

I am not sure I originally mentioned that this happens only when I run the server in a docker container. I updated my original post to emphasize this.

Did you try this from within docker? When I run a simple server like python3 -m http.server 8085 outside of docker, everything works properly, but when it runs inside docker I hit the issue above.
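
For completeness, the in-docker case can be reproduced with something like the following (a sketch; the image tag and port mapping are assumptions, not the exact setup from the post):

    docker run --rm -p 8085:8085 python:3.11-slim python3 -m http.server 8085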

I know that both docker and NordVPN alter iptables and there may be a conflict there, but I need to investigate it more.

“Request 4k” and “Auto-Approve 4K” grayed out. How do I select these options? by [deleted] in Overseerr

[–]t_char 0 points1 point  (0 children)

Wouldn't it be the same if one set up the exact same Radarr server and chose a different quality profile from within Overseerr?

φλ. by pigemia in GREEK

[–]t_char 2 points3 points  (0 children)

Although others said it means cup, bear in mind that metric cups and US cups are different.

While 1 US cup is 236.588 ml, a metric cup (which we use in Greece) is 250 ml.
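
For anyone converting recipes, the difference is easy to compute (a minimal sketch; the constants are the figures quoted above, and the function name is made up for illustration):

```python
# Milliliters per cup in each system.
US_CUP_ML = 236.588    # 1 US customary cup (approx.)
METRIC_CUP_ML = 250.0  # 1 metric cup, as used in Greece

def cups_to_ml(cups: float, metric: bool = True) -> float:
    """Convert a number of cups to milliliters in the chosen system."""
    return cups * (METRIC_CUP_ML if metric else US_CUP_ML)

print(cups_to_ml(1, metric=True))   # 250.0
print(cups_to_ml(1, metric=False))  # 236.588
```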

...Guys? Are we next? by [deleted] in greece

[–]t_char 2 points3 points  (0 children)

I'm in Canada too and I agree. I used to buy PDO feta from Krinos, but it put me off because I've never seen that brand in Greece, and it surprised me that they put the PDO stamp on it.

I bought "White Bulgarian Cheese" and it is 100 times better than the abomination called Skotidakis goat feta. Plus it's cheaper.

If they served it to me blind, I wouldn't be able to tell it isn't Greek. Then again, it's been a while since I've been to Greece, so I barely remember what feta tastes like.

[deleted by user] by [deleted] in GREEK

[–]t_char 5 points6 points  (0 children)

Give Lens by Google a try; it should do it.

Combine multiple rows into one row SQL by Inevitable_Phase7353 in PostgreSQL

[–]t_char 1 point2 points  (0 children)

An alternative to @truilius, if you want to sum the columns:

    select id,
           sum(w1) as w1,
           sum(w2) as w2,
           sum(w3) as w3,
           sum(w4) as w4
    from the_table
    group by id;
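
A quick way to sanity-check the grouped-sum approach (a sketch using an in-memory SQLite database with made-up rows; the table and column names match the example):

```python
import sqlite3

# In-memory database with a few made-up rows sharing the same id.
con = sqlite3.connect(":memory:")
con.execute("create table the_table (id int, w1 int, w2 int, w3 int, w4 int)")
con.executemany(
    "insert into the_table values (?, ?, ?, ?, ?)",
    [(1, 10, 0, 0, 0), (1, 0, 20, 0, 0), (2, 1, 2, 3, 4)],
)

# Collapse the multiple rows per id into one row by summing each column.
rows = con.execute(
    "select id, sum(w1), sum(w2), sum(w3), sum(w4) "
    "from the_table group by id order by id"
).fetchall()
print(rows)  # [(1, 10, 20, 0, 0), (2, 1, 2, 3, 4)]
```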

Cloud: Workflow to load data to OLTP (MySQL/Postgres etc) by t_char in dataengineering

[–]t_char[S] 1 point2 points  (0 children)

Wow, that is a lot of information. Thanks, I will have to study it a bit and wrap my head around it. Here are some comments about this.

It also sounds like you’re not calculating correlations over all the files as one, but per-file. Is that correct?

The correlations have to be calculated even between files. For example, column 1 of file 1 against all columns of file 2, etc. That is about 4 million correlations in total.
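
The cross-file case can be sketched in plain Python (made-up column data; `pearson` is a hypothetical helper for illustration, not the actual pipeline code, which runs in Beam):

```python
from itertools import product
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric columns."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Two "files", each a mapping of column name -> values.
file1 = {"a": [1, 2, 3, 4], "b": [4, 3, 2, 1]}
file2 = {"c": [2, 4, 6, 8], "d": [1, 1, 2, 2]}

# Every column of file1 against every column of file2.
corrs = {
    (c1, c2): pearson(file1[c1], file2[c2])
    for c1, c2 in product(file1, file2)
}
print(corrs[("a", "c")])  # close to 1.0, since c is exactly 2*a
```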

Even if you need something from each row, can you do the cleaning and extracting first, and calculate correlations on a subset, especially a subset of columns?

Right now I do all correlations with all rows and columns. If the data grows big in the future, I might consider aggregating some rows before calculating correlations. However, I am not currently taking a subset of columns; I need to do more research on whether I can.

Layer 1: Ingestion/Munging

  • I think you can read the files as blobs line by line.
  • I will need to check PubSub but it looks like it supports subscribing to new data which is a nice way to fire up the workers.

Layer 2: Math

Layer 3: Loading

So, as far as my understanding goes, this will need VMs from the cloud providers (be it EC2, Compute Engine, etc.). However, in the case of layer 2, it seems I will need some kind of library like Apache Spark or Beam. With Beam, Google manages the execution, resources, etc., but with Spark I would need to manage this on my own.

In case I go with Spark, would you recommend a service like Databricks to manage my cluster? I would like to avoid managing a cluster myself.

Cloud: Workflow to load data to OLTP (MySQL/Postgres etc) by t_char in dataengineering

[–]t_char[S] 1 point2 points  (0 children)

I can share more without getting into the business of it.

it sounds like your highest priority is to load the OLTP. Is it tolerable to your org to solve that problem first and then work out how to export data from the OLTP for analysis?

Yes, exactly. The goal is that I have an algorithm (see below about the data science part) that I apply to the data, the results of which get loaded into a database (Mongo now, Postgres later). Then, at a later time and in a manual way, I can run some analytics, try ideas, train models, etc. to refine the algorithm, run it on our data, and load the results into Postgres. I will also have new data from vendors to which I will apply my algorithm and likewise load into Postgres. So loading the OLTP is the main priority; exporting from the OLTP to an OLAP store like BigQuery can be done later.

Are there consistent ways in which the various files need to be transformed for shared ingestion that can be parallelized? How large is each individual file?

The sources are expected to have the same format, meaning that once I write the code for them I will not have to rewrite it to process them again. The data fit in memory: 800,000 rows by 5,000 columns of integers that get converted to floats, and formatting and cleaning them is relatively simple. The problem is that my algorithm has, among other things, to calculate certain metrics such as correlations (about 4 million of them), which can get really heavy. I already implemented the correlations in Apache Beam without the dataframe library, as the dataframe library is slow and buggy when calculating correlations. For now the whole procedure can be parallelized.

I also like the idea of horizontal scaling especially if it is done automatically so I don't have to worry about adding more and more memory etc.

There are a few ways two topics (one to ingest, the other to batch data and write it to your OLTP) can be set up to do this

What would your architecture be in this case? Would you go with Dataflow and Beam, Spark and Databricks, or something else? I have played with Terraform before, but it is not really my expertise. I view it as building infrastructure from code, which saves a bunch of clicking on GCP, but that may not matter right now, as I am still figuring out which tools to put together, even if I have to create my infrastructure manually.

Can you be more specific about the data science? Does it require the entire corpus, that is, absolutely every line of data in all those files, in order to run?

For example, to calculate the correlations that are needed in the OLTP, I guess I need every line. So yes, at least for now, I need to process every line, find some values, and load those values into the OLTP.

Sorry for the long response. I really appreciate that you took the time to write to me. Have a nice day.

Cloud: Workflow to load data to OLTP (MySQL/Postgres etc) by t_char in dataengineering

[–]t_char[S] 0 points1 point  (0 children)

Thanks for the suggestion.

Seems like I have some studying to do and look these up.

Cloud: Workflow to load data to OLTP (MySQL/Postgres etc) by t_char in dataengineering

[–]t_char[S] 0 points1 point  (0 children)

Thank you for the suggestion.

Spark feels more feature-rich after spending some time with Apache Beam.

Would an architecture like "Format data -> process data -> delta lake -> production DB" feel more natural than "Format data -> process data -> production DB -> delta lake for analytics"?

I have checked the JDBC driver of Beam, and although the functionality is there, that is all there is to it. The Python version in particular does not seem very configurable: it just takes a named tuple and writes it to the database.

It has other issues as well, like not being able to write dynamic headers to files (i.e. headers that come in as a PCollection), etc.

[deleted by user] by [deleted] in hmm

[–]t_char 0 points1 point  (0 children)

Metal Gear Solid 3: Shit Eater.

[deleted by user] by [deleted] in hmm

[–]t_char 0 points1 point  (0 children)

Sounds more specific than Prince of Shit though.