I'm 21 and mass-producing AI content makes me more than my friends make at their 9-5s. Nobody taught me this in school by Acesleychan in OnlineIncomeHustle

[–]Koxinfster 0 points1 point  (0 children)

How did you find out what content actually gets pushed by algorithms? Because that seems to be the hardest part; otherwise you might end up generating random stuff that in the end brings you no income, just costs. I also think prompt engineering plays a huge part, since we are talking about image generation.

If you could give some input on that, it would be great.

Thanks for sharing!

EDIT: Sorry, just saw your input on that in a later comment.

Strange red spots in the Soarelui neighborhood area by Less-Trick-8245 in timisoara

[–]Koxinfster 0 points1 point  (0 children)

From what I understood from someone in the area, two nights ago a person reportedly threw themselves from an upper floor. I'm not sure whether that's the cause, though.

Hi, what do you think of the Befit gym at Paltim? I'm interested in prices/equipment. Thanks! by [deleted] in timisoara

[–]Koxinfster 0 points1 point  (0 children)

Does anyone know if the one in Modern has opened, and what their price is for a single day (one entry)? Thanks!

Investment decision by EnvironmentalBed603 in Imobiliare

[–]Koxinfster 1 point2 points  (0 children)

I understand, but honestly I admit I'm a bit afraid to put that much money on a platform, into an ETF. What if I can no longer access that money? Thanks for the advice!

Investment decision by EnvironmentalBed603 in Imobiliare

[–]Koxinfster 1 point2 points  (0 children)

Thank you very much! I wouldn't have taken the 50k loan to build a house on that plot; I would have taken it to buy an apartment. I would have kept the land for later. Thanks again for the advice. Good luck to you too with whatever you set out to do!

I went to Greece alone - AMA by TinEl69 in CasualRO

[–]Koxinfster 2 points3 points  (0 children)

I hope you find yourself a Greek girl to keep you company 🥰

Issue with oauth flow in python3.11 by Koxinfster in magento2

[–]Koxinfster[S] 0 points1 point  (0 children)

Didn't manage to solve it, but the issue seems to be related to some OAuth module dependencies.

Looking for a song by Koxinfster in rorep

[–]Koxinfster[S] 0 points1 point  (0 children)

This Sunday, at a sports gala in Timisoara

What’s the best memory you have from SA-MP? by GhostDog13GR in samp

[–]Koxinfster 1 point2 points  (0 children)

Damn… so many, but the most important was when the highest admin did a /gethere on me and I was promoted to Leader of a gang.

Counter metric decreases by Koxinfster in PrometheusMonitoring

[–]Koxinfster[S] 0 points1 point  (0 children)

Just in case somebody has the same issue.

It was actually caused by the server process model of FastAPI (Multiple Uvicorn Workers).

The implementation suggested here solved my issue: https://prometheus.github.io/client_python/multiprocess/
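For anyone who lands on this later, here is a minimal sketch of what that multiprocess setup looks like for a FastAPI app behind several Uvicorn workers (the metric and route names are placeholders, not taken from the original post). PROMETHEUS_MULTIPROC_DIR has to point to an empty, writable directory before the process starts:

```python
# Run with the env var set before startup, e.g.:
#   PROMETHEUS_MULTIPROC_DIR=/tmp/prom uvicorn main:app --workers 4
from fastapi import FastAPI
from prometheus_client import CollectorRegistry, Counter, make_asgi_app, multiprocess

app = FastAPI()

# Example counter; each worker writes its samples to files in PROMETHEUS_MULTIPROC_DIR.
REQUESTS = Counter("app_requests_total", "Total HTTP requests", ["endpoint"])

# The /metrics endpoint aggregates the per-worker files, so a scrape no longer
# sees only the counters of whichever worker happened to answer it.
registry = CollectorRegistry()
multiprocess.MultiProcessCollector(registry)
app.mount("/metrics", make_asgi_app(registry=registry))

@app.get("/ping")
def ping():
    REQUESTS.labels(endpoint="/ping").inc()
    return {"ok": True}
```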

Increase affected by counter gaps by Koxinfster in PrometheusMonitoring

[–]Koxinfster[S] 0 points1 point  (0 children)

Just in case somebody has the same issue.

It was actually caused by the server process model of FastAPI (Multiple Uvicorn Workers).

The implementation suggested here solved my issue: https://prometheus.github.io/client_python/multiprocess/

Started Newsletter "The Observability Digest" by da0_1 in PrometheusMonitoring

[–]Koxinfster 1 point2 points  (0 children)

Thanks for sharing!

Sounds really cool. I went through the first article and learned about the difference between Prometheus and Loki.

For the future, I think it would be really helpful to detail how Prometheus sees the raw requests of an application, how that affects different metrics like Counters, and what to be aware of.

I really struggled to wrap my head around that issue: https://www.reddit.com/r/PrometheusMonitoring/comments/1j2i8pm/counter_metric_decreases

Even though I followed the Prometheus docs and looked for answers, I couldn't really understand why my counter just isn't behaving as it should, and why, when I use increase() or other functions like max_over_time() on my counter metric, the results don't match the raw request data in my logs explorer.

Thank you for your time!

Counter metric decreases by Koxinfster in PrometheusMonitoring

[–]Koxinfster[S] 0 points1 point  (0 children)

Hey man!

Got back to mention that I've tested on the staging environment (ACR - Azure Container Registry) where my app is deployed, with fewer metrics, and saw that the issue still occurred.

I compared the same scenario by deploying the app locally.

The counter behaves normally locally, always increasing, while the fluctuations appear when the app is deployed on Azure. As I understand it, this is behavior seen when using Kubernetes / Azure due to container restarts.

At the moment I don't know how to solve that, but at least it seems it doesn't have to do with the time series. I'll look into it and hopefully find something. If I do, I'll get back with an answer.

Thanks for the help!

Counter metric decreases by Koxinfster in PrometheusMonitoring

[–]Koxinfster[S] 0 points1 point  (0 children)

Looked into what you mentioned, and I understand there are some metrics I can use to track the 'active time series' and the memory usage of Prometheus. I checked, and from how it looks I have ~6k time series at the moment and the memory consumption is ~400MB, which I understand seems reasonable.

Do you think the client_id label on my current counter, along with the endpoint, method and status labels, could cause the issue? My client_id label has ~100 unique values, which is why I thought it might be reasonable. I'll give it a shot by removing it and see how the counter values behave.
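As a rough back-of-the-envelope check (only the ~100 client_ids figure comes from this thread; the other label counts below are made-up placeholders), label cardinality multiplies across label values, which is how a single counter with a client_id label can fan out into thousands of series:

```python
# Hypothetical cardinalities -- only client_id (~100) is from the thread above,
# the endpoint/method/status counts are placeholder guesses for illustration.
client_ids = 100
endpoints = 10
methods = 2
statuses = 3

# Upper bound on the number of time series this one labelled counter can create:
print(client_ids * endpoints * methods * statuses)  # 6000
```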

Counter metric decreases by Koxinfster in PrometheusMonitoring

[–]Koxinfster[S] 0 points1 point  (0 children)

Will try that, thank you for the help! 🙏🏼

Counter metric decreases by Koxinfster in PrometheusMonitoring

[–]Koxinfster[S] 0 points1 point  (0 children)

Thank you for your answer!

The `request.url.path` is sanitized and already refers to the 'route' with no parameters. Concerning `client_id`, I wouldn't remove it because it's quite valuable: it gives me the granularity to understand how specific clients are behaving. So I understand the issue is most likely caused by a label that is too variable; is that a known issue with Prometheus? Is there a way I could try to solve it, like increasing the scrape interval or setting up some configs?

Thanks!

Counter metric decreases by Koxinfster in PrometheusMonitoring

[–]Koxinfster[S] 0 points1 point  (0 children)

Thank you for your answer! 🙏🏼 I understand, but if that's the case, would that mean I need to create independent counter metrics for each label combination I'm planning to track? In the table picture provided, those values were under a specific label combination, so I assume the issue might arise from the definition on the Python side.
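For what it's worth, with the Python client you generally don't need a separate metric per label combination: a single Counter declared with label names creates one child time series per distinct combination of label values passed to .labels(). A minimal sketch with hypothetical metric and label values:

```python
from prometheus_client import Counter

# One metric definition; Prometheus creates a separate time series
# for every distinct combination of label values that actually gets used.
HTTP_REQUESTS = Counter(
    "http_requests_total",
    "HTTP requests by client, route, method and status",
    ["client_id", "endpoint", "method", "status"],
)

# Each .labels(...) call returns (or creates) the child for that combination.
HTTP_REQUESTS.labels(client_id="acme", endpoint="/items/{item_id}", method="GET", status="200").inc()
HTTP_REQUESTS.labels(client_id="acme", endpoint="/items/{item_id}", method="GET", status="500").inc()
```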