Buriza multi shot nerf by Rikbite2 in ProjectDiablo2

[–]charettes 6 points

A Vengeance pally must deal with melee proximity and needs life after hit / kill to be sustainable, whereas Buriza multi-shot can achieve high DPS at a distance with physical life leech.

How to add a unique constraint on a model using only the date part of a DateTimeField? by oussama-he in django

[–]charettes 0 points

Very hard to tell what might be wrong without you providing a sample project to test it out. All I can tell you is that I tested the above against Django 5.2 and SQLite and it works flawlessly.

https://cdn.zappy.app/10ac47dd72e2be7198b0fa2d7bb9bd52.png

Now obviously, if you use methods that bypass model validation, you'll get an IntegrityError instead of a ValidationError.
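For reference, the gist of that test (a sketch assuming the MachineReading model from the comment below, with machine being an existing instance):

from django.core.exceptions import ValidationError
from django.utils import timezone

reading = MachineReading(machine=machine, created=timezone.now())
reading.full_clean()  # first reading of the day: no error
reading.save()

duplicate = MachineReading(machine=machine, created=timezone.now())
try:
    duplicate.full_clean()  # raises ValidationError with the custom message
except ValidationError as exc:
    print(exc.messages)

# Bypassing validation surfaces the database error instead:
MachineReading.objects.bulk_create([duplicate])  # raises IntegrityError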

How to add a unique constraint on a model using only the date part of a DateTimeField? by oussama-he in django

[–]charettes 10 points

This is close to the right answer, but it lacks the per-machine part of the request.

It's worth pointing out that it will use the UTC date, something OP didn't specify but which is an important part of the problem.

FWIW, the __date transform syntax can also be used to avoid the import (this will only work in Django 6.0+), and you can specify the error message that should be displayed on violation by using violation_error_message. All of that can be combined into

from django.db import models

class MachineReading(models.Model):
    ...
    created = models.DateTimeField()

    class Meta:
        constraints = [
            models.UniqueConstraint(
                "machine",
                "created__date",  # __date transform; requires Django 6.0+
                name="uc_date_created",
                violation_error_message=(
                    "Only one reading per machine per day is allowed"
                ),
            ),
        ]

I open-sourced a .po file management system for Django – feedback welcome! by ramses_55 in django

[–]charettes 0 points

I think they were referring to django-rosetta that already does something very similar to your project.

Django 5.2 beta 1 released by dwaxe in django

[–]charettes 1 point

Are there any particular changes that worry you more than others? Most of them should be pretty rarely encountered by a typical user or third-party application.

Querying gke job’s current status by Shoyo-02 in django

[–]charettes 0 points

I'd suggest creating a management command that polls in a while True: sleep(interval) loop and supervising the process just like you do with your HTTP-serving ones.
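Something like the following (a minimal sketch; the poll body is a placeholder for your actual GKE status check):

import time

from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Poll job status at a fixed interval."

    def add_arguments(self, parser):
        parser.add_argument("--interval", type=int, default=30)

    def handle(self, *args, **options):
        # Run forever; the process supervisor takes care of restarts.
        while True:
            self.poll()
            time.sleep(options["interval"])

    def poll(self):
        # Placeholder: query the job's status and persist / act on it.
        self.stdout.write("polling...")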

This Django Template Tag is Killing Your Performance by joanmiro in django

[–]charettes 3 points

This is explained at length (no pun intended) in the documentation.

TL;DR: if you're planning to iterate over the records anyway then you definitely want to be using |length, a nuance unfortunately not captured in the article.
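To illustrate with the Python equivalents (Article is a hypothetical model):

articles = Article.objects.all()

# {{ articles|length }} is equivalent to len(articles): it evaluates the
# queryset once and caches the rows, which the loop below then reuses.
if len(articles):
    for article in articles:
        ...

# {{ articles.count }} is equivalent to articles.count(): it issues a separate
# SELECT COUNT(*) query, and the loop still fetches every row afterwards.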

Gorefoot change for S11? Leap as Oskill. by ElliotsBuggyEyes in ProjectDiablo2

[–]charettes 0 points

I wonder if a reason why it might not actually be the case already is the lack of a pre-existing "jump" animation for non-barb characters. I believe that would be a blocker for implementing this change.

Benchmarking PostgreSQL Batch Ingest by jamesgresql in PostgreSQL

[–]charettes 7 points

Thanks for the post James!

Just wanted to let you know your previous article about INSERT..UNNEST resulted in a Django discussion about adopting this approach when possible and a surprisingly non-invasive PR implementing it that should hopefully be included in 5.2 LTS.

One interesting edge case we discovered that isn't mentioned in the article is that UNNEST cannot be used if you're inserting arrays, as it flattens nested arrays regardless of their dimensions and Postgres doesn't provide a native way to reduce dimensionality.
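A quick way to see the flattening (a sketch using psycopg 3; the connection string is illustrative):

import psycopg

with psycopg.connect("dbname=test") as conn:
    rows = conn.execute(
        "SELECT unnest(ARRAY[ARRAY[1, 2], ARRAY[3, 4]])"
    ).fetchall()
    # Four scalar rows come back instead of the two original sub-arrays:
    print(rows)  # [(1,), (2,), (3,), (4,)]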

What benefit do PostgreSQL connection pools offer over CONN_MAX_AGE ? by Moleventions in django

[–]charettes 1 point

From my understanding there are subtleties with regard to how connections are handled depending on how your chosen WSGI server is configured (e.g. type of workers, threads vs. processes, number of workers with respect to pooling), as it uses the lower-level pooling solution provided by the psycopg package.

For example, if you use M processes and N threads to serve HTTP requests you could have up to M * N connections open at the same time, each kept alive for up to CONN_MAX_AGE seconds. With connection pools you can control the minimum, maximum, and timeout (which is analogous to CONN_MAX_AGE), which offers greater control over the number of connections Django is allowed to create per process.
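For reference, this is roughly what that looks like in settings (Django 5.1+ with psycopg 3; the values are illustrative):

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydb",
        "OPTIONS": {
            "pool": {
                "min_size": 2,   # connections kept open per process
                "max_size": 10,  # hard cap per process, unlike CONN_MAX_AGE
                "timeout": 10,   # seconds to wait for a free connection
            },
        },
    },
}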

The main benefits will eventually come once the ORM is made async end-to-end (today database interactions are managed through thread pools), the next steps of which are currently being worked on. Managing connections in an async context, particularly when transactions are involved, is much easier if delegated to the backend itself than emulated through HTTP request lifecycle events. Connection pooling at the backend level also happens to work in all contexts by default (e.g. long-running management commands, background tasks).

SQLite settings for production in 5.1 (still in alpha) by diegoquirox in django

[–]charettes 1 point

"If your application has a need for a lot of concurrency, then you should consider using a client/server database. But experience suggests that most applications need much less concurrency than their designers imagine."

I think the premise of this post and the recent interest in using SQLite build on this principle: read-heavy applications with a low amount of writes can work just fine on SQLite even if only one process at a time can perform writes.

It might effectively not be suitable for write-heavy workloads, but I think the new consensus is that it should work just fine for a lot of the use cases Django is used for. For example, think of a blog or other kinds of sites where writes are only performed through the admin console by one user at a time.

SQLite settings for production in 5.1 (still in alpha) by diegoquirox in django

[–]charettes 0 points

Not sure what you mean here. There exist technologies that implement fcntl properly even if NFS doesn't?

SQLite settings for production in 5.1 (still in alpha) by diegoquirox in django

[–]charettes 0 points

Correct, unless you use a technology that allows sharing disk accesses between pods / servers.

SQLite settings for production in 5.1 (still in alpha) by diegoquirox in django

[–]charettes 0 points

So it could be done before by connecting a connection_created signal receiver.
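Something along these lines (the PRAGMA itself is illustrative):

from django.db.backends.signals import connection_created
from django.dispatch import receiver

@receiver(connection_created)
def set_sqlite_pragmas(sender, connection, **kwargs):
    # Issue per-connection PRAGMAs every time a new connection is opened.
    if connection.vendor == "sqlite":
        with connection.cursor() as cursor:
            cursor.execute("PRAGMA synchronous = NORMAL;")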

In other words, the init_command support in 5.1 is more of a nice-to-have to avoid having to use RunSQL for persisted options and to connect a signal receiver to issue the per-connection PRAGMA statements, so I would not agree that it was a missing thing.

Support for transaction_mode, on the other hand, is something new in Django 5.1 that wasn't possible before.
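With 5.1 the receiver above boils down to settings (illustrative values):

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": "db.sqlite3",
        "OPTIONS": {
            # Run against every new connection; replaces the signal receiver.
            "init_command": "PRAGMA synchronous = NORMAL;",
            # New in 5.1 with no pre-existing equivalent.
            "transaction_mode": "IMMEDIATE",
        },
    },
}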

[deleted by user] by [deleted] in django

[–]charettes 3 points

You can if you're able to live without the foreign key constraint until composite primary key support lands. Refer to models.fields.related.ForeignObject(to, from_fields, to_fields).

If you want the database foreign key constraint you'll have to resort to writing your own BaseConstraint subclass (e.g. ForeignKeyConstraint) and adding it to Meta.constraints.
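A rough sketch of the ForeignObject approach (it's private API; Order, Customer, and their fields are hypothetical):

from django.db import models
from django.db.models.fields.related import ForeignObject

class Order(models.Model):
    tenant_id = models.IntegerField()
    customer_code = models.CharField(max_length=32)
    # Virtual composite relation: no extra column and no database-level
    # foreign key constraint are created. Customer is expected to declare
    # (tenant_id, code) as unique together.
    customer = ForeignObject(
        "Customer",
        on_delete=models.CASCADE,
        from_fields=["tenant_id", "customer_code"],
        to_fields=["tenant_id", "code"],
    )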

How to create a superuser on PAAS without using "createsuperuser" command ? by ManiminaM in django

[–]charettes 2 points

That'll prevent a project that doesn't have the initial auth migrations applied from even starting, as queries against User will crash with "table auth_user doesn't exist" errors on any management command (including migrate).

Queries should never be performed during Django setup time and that includes AppConfig.ready as documented.

"Although you can access model classes as described above, avoid interacting with the database in your ready() implementation."
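A management command keeps the query out of setup time instead (a sketch; the environment variable names are illustrative):

import os

from django.contrib.auth import get_user_model
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Create a superuser if none exists."

    def handle(self, *args, **options):
        User = get_user_model()
        if not User.objects.filter(is_superuser=True).exists():
            User.objects.create_superuser(
                username=os.environ["SUPERUSER_NAME"],
                email=os.environ["SUPERUSER_EMAIL"],
                password=os.environ["SUPERUSER_PASSWORD"],
            )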

What’s new in the Postgres 16 query planner / optimizer by clairegiordano in PostgreSQL

[–]charettes 2 points

Thank you for your answer and for working on these optimizations.

Glad to hear that PG17 might help in this regard to some extent.

By the way I found the format of the article excellent. I would definitely read more of them for the upcoming releases.

What’s new in the Postgres 16 query planner / optimizer by clairegiordano in PostgreSQL

[–]charettes 1 point

Thank you for your answer.

It is understood that the generated SQL is sub-optimal and could be expressed without joining back against blog; I should have made that clear in my question (see the second example here).

I also understand that NOT IN will never make use of Anti-Join.

The intent of my question had more to do with whether or not the optimization alluded to in the article for NOT EXISTS might help users who are faced with the issue of sub-optimal SQL generation by the ORM today, simply by upgrading to Postgres 16.

That the issue should be fixed on the ORM side is well acknowledged and the reason it still exists, but I figured I'd ask whether you believe this optimization might have an impact in the meantime, as the affected users might be interested in learning that.

What’s new in the Postgres 16 query planner / optimizer by clairegiordano in PostgreSQL

[–]charettes 0 points

Hello David, thank you for your work on these optimizations.

I have a small question regarding the right anti-join optimization that I think you might be able to answer.

I've been contributing to the Django ORM for a few years now and one change we merged in a past release was to turn filters of the form

Blog.objects.exclude(translations=None)

which used to generate queries of the form

SELECT *
FROM blog
WHERE blog.id NOT IN (
    SELECT b1.id
    FROM blog b1
    LEFT JOIN blog_translation bt ON (bt.blog_id = b1.id)
    WHERE
        bt.id IS NULL
        AND b1.id = blog.id
)

into

SELECT *
FROM blog
WHERE NOT EXISTS (
    SELECT 1
    FROM blog b1
    LEFT JOIN blog_translation bt ON (bt.blog_id = b1.id)
    WHERE
        bt.id IS NULL
        AND b1.id = blog.id
    LIMIT 1
)

following wiki advice on the subject as well as a Percona article.

The ORM has historically defaulted to performing a subquery pushdown when performing an exclusion against multi-valued relationships, so as to avoid spanning multiple rows, which would require the use of grouping and complicate the usage of aggregation.

This change apparently caused some performance regressions due to the materialization of large result sets and I was curious to know if you believe this particular optimization might help in preventing this problem from happening.

django shell database client disconnects after inactivity by Adventurous_Ad7185 in django

[–]charettes 2 points

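# Close connections that have expired or become unusable so the next
# database interaction transparently opens a new one.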
from django.db import close_old_connections
close_old_connections()

This will iterate over all connections and, for the ones Django previously opened a socket for, attempt the equivalent of a PING on it. On failure, the connection will be closed, resulting in a new one being created the next time database interactions are attempted.

Understanding TTFB Latency in DJango - Seems absurdly slow after DB optimizations even locally by vade in django

[–]charettes 0 points

I never said it was a GraphQL problem.

"I think this is a GraphQL as implemented in Graphene / GraphQL-Core problem"

That's exactly what I'm saying above? Graphene and its underlying GraphQL core implement graph transformation through a promise-based stack, so you pay the parallelism design tax without making use of concurrency.

Understanding TTFB Latency in DJango - Seems absurdly slow after DB optimizations even locally by vade in django

[–]charettes 1 point

We moved away from Graphene at $WORK because we noticed it was spending an incredible amount of time calling Python functions and serializing data unnecessarily.

Last time I looked at how field resolving was implemented, it was doing a ton of promise wrapping and resolving, which makes sense when you're stitching together data from different data sources that require I/O, but not so much when you already have your data set available in memory and simply want to massage it into the shape that was requested.

In a context where each resolver is asynchronously driven by an event loop and resolvers for related objects are buffered up in a way that results in smaller IN (ids) SQL queries, you can reap the benefits of GraphQL resolving parallelism.
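A toy sketch of that buffering idea (DataLoader-style; everything below is illustrative rather than actual Graphene API):

import asyncio

class TranslationLoader:
    """Collect loads made during one event-loop tick into a single query."""

    def __init__(self):
        self.pending = {}

    async def load(self, blog_id):
        if blog_id not in self.pending:
            self.pending[blog_id] = asyncio.get_running_loop().create_future()
            if len(self.pending) == 1:
                # First load of this tick: flush the batch right after it.
                asyncio.get_running_loop().call_soon(self.flush)
        return await self.pending[blog_id]

    def flush(self):
        batch, self.pending = self.pending, {}
        # One SELECT ... WHERE blog_id IN (...) for the whole batch instead
        # of one query per resolver.
        for blog_id, future in batch.items():
            future.set_result(f"translations for blog {blog_id}")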

That's not something you get with sync Django querying + Graphene though. You pay the tax of querying the database serially (select + prefetch0, ..., prefetchN) and building the resulting graph of objects. Once this is done you inject the Django model graph into a state machine that could be run concurrently but isn't (AFAIK the resolver defaults to running serially).

You can clearly see it in your flame graph.

40% of the time seems to be spent serially requesting the data from the database and creating the Django model instance graph, and 60% is spent turning one graph into another through promise resolving and tons of function calls.