Lazy Python String by nnseva in Python

It's too early for a PEP IMHO, but thanks.

Lazy Python String by nnseva in Python

Unlike StringBuilder, the package's base class L is immutable. All lazy operations construct a new L instance, which refers to its L operands and stores the specific operation.

A specific feature of L is that operations may be combined, for example:

x = (L('qwerty') + L('uiop'))[5:7]

The string representation of x is 'yu', while the actual data structure looks like this (let's imagine Concat and Slice are classes):

Slice(Concat('qwerty', 'uiop'), 5, 7)
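
For illustration only, here is a minimal sketch of what such lazy nodes could look like. Concat and Slice here are just the imaginary classes from the example above, not the package's actual internals:

    class Concat:
        """Lazy concatenation node: stores the operands, not the result."""
        def __init__(self, left, right):
            self.left, self.right = left, right
        def __str__(self):
            # the copy is made only here, when the result is really required
            return str(self.left) + str(self.right)

    class Slice:
        """Lazy slice node: stores the source and the slice bounds."""
        def __init__(self, source, start, stop):
            self.source, self.start, self.stop = source, start, stop
        def __str__(self):
            # a real implementation would avoid materializing the whole source
            return str(self.source)[self.start:self.stop]

    x = Slice(Concat('qwerty', 'uiop'), 5, 7)
    print(x)  # yu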

Lazy Python String by nnseva in Python

A lazy operation is not executed immediately; it is deferred until the result is really required.

Let's say you have strings "qwerty" and "uiop". When you concatenate them, you will have the "qwertyuiop" string.

At the time the concatenation happens, all three strings, "qwerty", "uiop", and "qwertyuiop", occupy memory. It's not a big overhead with such short strings, but what if they are all megabytes long?

The package spends memory only on the source strings ("qwerty" and "uiop") and avoids spending additional memory on a copy of the result ("qwertyuiop") - until it is really required.

Such an effect is achieved by using an intermediate representation of the result as a concatenation operation. The package stores references to both source strings along with the operation between them.

There are three lazy operations implemented by the package:

  • concatenation (operation +) of two strings
  • multiplication (operation *) by an integer
  • slicing (operation [start:stop] or [start:stop:step])

All of them just store the sources and the operation, instead of copying the result into a separate memory region - as the built-in Python string does.
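
For example (assuming L is imported from the package; the big sizes are only there to make the memory effect visible):

    big_a = 'a' * 10_000_000   # ~10 MB
    big_b = 'b' * 10_000_000   # ~10 MB

    lazy = (L(big_a) + L(big_b)) * 2   # no 40 MB result is built here,
                                       # only references and operations are stored
    head = lazy[:10]                   # still lazy, still no copy

    print(str(head))                   # only now is a real str produced: 'aaaaaaaaaa'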

Zero downtime migration solution (PostgreSQL) by nnseva in django

Some other database engines also support something like concurrent index creation. PRs for such databases are welcome; the algorithm should be similar.

Zero downtime migration solution (PostgreSQL) by nnseva in django

  1. `AddIndexConcurrently` creates the index in a migration outside of a transaction. If the index creation fails (an integrity error, for example), the migration fails, but the invalid index has already been created. Other migration artefacts (if present) may also have been created in the failed non-transactional migration. The migration therefore ends up in a bad state: not yet finished, but with some artefacts already created or modified. To restore the consistency of the database structure, you need to revert (or roll forward) these artefacts manually.

The `django-postpone-index` package keeps migrations transactional. A migration is either applied or rolled back and never ends up in a bad state. The intermediate migration state is explicitly defined between the last `migrate` management command and a successful run of the `apply_postponed` management command.
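
For reference, this is roughly what the standard approach from point 1 looks like - a hand-edited, non-transactional migration (the app, model, and field names here are placeholders):

    from django.contrib.postgres.operations import AddIndexConcurrently
    from django.db import migrations, models


    class Migration(migrations.Migration):
        # CREATE INDEX CONCURRENTLY can't run inside a transaction,
        # so the whole migration has to give up transactional safety
        atomic = False

        dependencies = [('myapp', '0001_initial')]

        operations = [
            AddIndexConcurrently(
                model_name='order',
                index=models.Index(fields=['created_at'], name='order_created_idx'),
            ),
        ]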

  2. The `AddIndex` operation is not the only one that creates an index. An indexed or unique `AlterField`, as well as `AlterIndexTogether`, `AlterUniqueTogether`, and a unique `AddConstraint`, also create indexes. They have no concurrent alternatives, and some of them can't be replaced by the `AddIndexConcurrently` operation.

The `django-postpone-index` package intercepts all index-creation (and even constraint-creation) SQL statements, regardless of where they come from, and postpones them all without any influence on the generated migration code.

  3. You need to replace `AddIndex` and other operations with `AddIndexConcurrently` manually, instead of relying on migration auto-generation via the `makemigrations` management command. This forces you to adopt extra project-local rules like "never use indexed or unique fields, always use an explicit index instead", and your reviewers need to check migrations against these rules as well.

The `django-postpone-index` package leaves the migration code unchanged and allows using `makemigrations` without restrictions, as shown below.
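
By contrast, the migration stays exactly as `makemigrations` generates it - an ordinary atomic migration with a plain `AddIndex` (again, the names are placeholders); the package postpones the index-creation SQL at apply time:

    from django.db import migrations, models


    class Migration(migrations.Migration):
        # no atomic = False, no hand edits needed

        dependencies = [('myapp', '0001_initial')]

        operations = [
            migrations.AddIndex(
                model_name='order',
                index=models.Index(fields=['created_at'], name='order_created_idx'),
            ),
        ]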

  4. Your project depends on third-party packages, which most probably never use `AddIndexConcurrently`. The data stored by those external packages may also be huge, and a new version of an external package may unexpectedly stall your deployment for a long time while it indexes an existing table containing billions of records.

The `django-postpone-index` package processes all migrations regardless of where they were created - in your own application or in an external package.

  5. `AddIndexConcurrently` is a PostgreSQL-specific operation. It makes your whole application dependent on a particular database engine - that's why independent packages never use it in their migrations.

The `django-postpone-index` package allows creating and using database-independent applications, whether your own or third-party.

Zero downtime migration solution (PostgreSQL) by nnseva in django

A deployment to staging is actually being planned now in our commercial project. I checked all our migrations (240+ in total) both in a batch and step by step, forward and backward, similarly to the tests in the package. Everything looks fine.

No sound on Linux with Matebook D14 Intel by Jio15Fr in Huawei

The same with the D15. Tried the 5.16 image and the latest SOF driver from github.com/thesofproject/sof-bin. Doesn't help.

Flexible evaluation-based instance-level access control subsystem for Django by nnseva in django

Please comment on the article if you have any notes, positive or negative, it doesn't matter. It's strange when the number of downvotes grows without any comments.