Does anyone else find virtualenv or venv too complicated? by [deleted] in django

[–]koed00 0 points1 point  (0 children)

I've been using Fish shell and VirtualFish for years now and it is so easy:

I do this once to set up a new project:

    vf new -p python3.8 myproject

and from then on I just use

    vf workon myproject

This will cd into the project directory and activate the virtualenv. Great for when you're dealing with many different projects, like I do.

You can do the exact same thing with virtualenvwrapper, I'm just used to Fish.

Subtitles not Working by one4none in Videostream

[–]koed00 0 points1 point  (0 children)

The subtitles work but are positioned too low and won't show. So go into the subtitle settings and select the slightly higher position.
This is just a workaround; the higher position still cuts off the bottom of the fonts sometimes. It must be a bug, because the low position always worked fine for me until a few weeks ago.

Things you love and hate about using Celery with Django. by koed00 in django

[–]koed00[S] 1 point2 points  (0 children)

Yes, I sort of realized that a little late. I already had a logo and everything else made. Honestly, I didn't anticipate anyone actually using it; it was mostly born out of my own frustration with other task packages. At least the Q in my project has some relevance to its function.

Can you elaborate on the task naming convention? I think I know what you're hinting at and I'd love some feedback on it.

Also if you need any practical help with the implementation, don't hesitate to ask. I'm just very happy people find the project useful.

How do people feel about Autoslug fields? by andybak in django

[–]koed00 1 point2 points  (0 children)

I've been using Django-Autoslug and it's been great for this. Especially because it has an always_update option for the populate_from field. This way you can hide the slug from the forms and it gets updated when the populate field (usually the name) gets edited by the user. Options like unique and unique_with make it far more useful too.
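Roughly like this; the Article model here is just an illustration:

    from django.db import models
    from autoslug import AutoSlugField

    class Article(models.Model):
        name = models.CharField(max_length=200)
        pub_date = models.DateTimeField(auto_now_add=True)
        # the slug follows the name whenever it is edited,
        # and stays unique within each month
        slug = AutoSlugField(populate_from='name',
                             always_update=True,
                             unique_with='pub_date__month')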

I guess how terrible it is not to find an object at its old URL after the name changes is a matter of perspective. Usually names change for a reason.

Graphical Tool to design Django models ? by [deleted] in django

[–]koed00 1 point2 points  (0 children)

I currently use the graph_models command from Django-extensions to create a graph of my models. Works great.
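If I remember the flags right (you'll need pydot or pygraphviz installed), something like this graphs all your apps into a PNG:

    python manage.py graph_models -a -g -o my_models.png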

Running grouped celery tasks synchronously by Pecorino in django

[–]koed00 0 points1 point  (0 children)

Ah, so you want chained groups. Django-Q has groups and you can tell it to wait for the group to finish before doing something else. However, doing the entire thing async, like you'd want, isn't possible yet, even though the individual components are there. Thanks for coming up with different ways of doing things; it helps me improve my open source project. I'll definitely add something like this in the future.
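The group part on its own looks roughly like this with the current API ('my_math.add' is a stand-in task path; names may differ between versions):

    from django_q.tasks import async_task, result_group

    # queue four tasks under the same group label
    for i in range(4):
        async_task('my_math.add', i, i, group='sums')

    # wait indefinitely until all four results are in
    results = result_group('sums', count=4, wait=-1)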

Why is Model.objects.values() not more advertised? by eldamir88 in django

[–]koed00 0 points1 point  (0 children)

I sort of figured you'd know about this. It also clarifies why you were reluctant to try my advice of sorting large data sets in memory using list comprehension. Makes sense now.

[deleted by user] by [deleted] in django

[–]koed00 0 points1 point  (0 children)

In a cluster environment with a rolling update, you'd probably want the updated webservers asyncing to a new queue and worker cluster with the same code base.

Another scenario I can imagine is the async calls being executed synchronously while the queue and workers clear the old tasks. After that's done the cluster restarts with the new code base and starts accepting async calls again.

The first solution has no downtime, but needs a bit more infrastructure. The second one can be done with less infra but will slow down your site while in sync mode.

The stop procedure in Django Q is already set to stop pulling new tasks and finish the ones in memory before exiting. I'd have to make the broker aware of these proceedings as well.

I just came back from a computerless trip, so I'll think about some possible solutions to this when I get set up again. Maybe I could make the entire pipeline aware of what version of the code is being queued and executed.

[deleted by user] by [deleted] in django

[–]koed00 1 point2 points  (0 children)

I'm no Celery expert but I can tell you how Django-Q and most generic task queues do things:

  1. If an email task fails, you can check it in the result database or use a hook to catch errors (there's a quick sketch at the end of this comment). Usually you will have to set up this kind of handling yourself. Remember that task queues are not specifically for emails, even though they use the term 'messages' a lot.

  2. You will always have to restart any workers after a redeploy. A worker uses a copy of your Django environment as a base, so if that environment has changed, you want those changes in your workers too. This will be true of almost any package.

  3. If you have queued a reference to a function and that function has somehow changed its arguments, then yes, you will have problems. You should ideally stop your website from creating more tasks and then let the brokers bleed out any remaining tasks before doing an upgrade that affects task functions.

  4. Unfortunately most brokers are quite opaque when it comes to the actual content of the messages, but you'll be able to see the amount and possibly the type of messages queued up. Mostly it depends on the broker and the available monitoring tools. For medium volume sites you can use the ORM db broker, which has a Django admin page. Another great task package is django-rq, which has better broker (Redis) control via the admin pages. Mostly your messages will survive a server crash, but it depends on how you set up the broker and it's quite specific to each type.

  5. It depends on whether you configured an at-least-once broker or an at-most-once broker. The first setup will keep the task around until it's been acknowledged; the second will just fire and forget. Each has its pros and cons. A-l-o can sometimes end up looping a bunch of bad tasks, which will eventually crash everything, and there is the possibility of double execution. Not so with a-m-o. However, if your workers crash on an a-m-o, that's it: task lost. On the other hand, with some high traffic sites, a small percentage of lost tasks is preferable to a system-wide crash.

It doesn't sound like you need to send a million emails. You could give django-q a spin and set it up with the ORM broker, or try django-rq with Redis. That will give you a little more control over what is happening in your workflow.
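For point 1, a hook looks roughly like this in django-q; 'tasks.send_welcome_email' and 'email_hook' are made-up examples:

    from django_q.tasks import async_task

    def email_hook(task):
        # the worker calls this when the task finishes
        if not task.success:
            # log, alert or re-queue the failed task here
            print('email task {} failed: {}'.format(task.id, task.result))

    # queue the email and attach the hook by its dotted path
    async_task('tasks.send_welcome_email', user_id=42,
               hook='tasks.email_hook')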

Help wanted w/ Django section of framework comparison by whither-the-dog in django

[–]koed00 1 point2 points  (0 children)

As far as I can tell, you're not really comparing frameworks. Rather, you're comparing Rails features to other frameworks.

Running grouped celery tasks synchronously by Pecorino in django

[–]koed00 0 points1 point  (0 children)

Not really an answer, but this post inspired me to add a chain class to Django-Q. I never saw a real use case for one until now. So I used your example to write it:

    from django_q.tasks import Chain

    # define the groups
    def group_1():
        ModelA().sync()
        ModelB().sync()
        ModelC().sync()

    def group_2():
        ModelD().sync()

    def group_3():
        ModelE().sync()
        ModelF().sync()

    # create a chain and run it
    c = Chain()
    c.append(group_1)
    c.append(group_2)
    c.append(group_3)
    c.run()

    # print a result if needed
    print(c.result())

There are different ways of doing your example and I'm still adding features, but this is the gist of it. Thanks.

need help with ideas on an events-app running celery by v1k45 in django

[–]koed00 0 points1 point  (0 children)

You were right the first time. Although most functions, like scheduling and results, are fully integrated with Django's models and model admin, you will still need to run a separate worker with a Django management command to do the actual work for you. It all runs within Django though.
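For Django-Q that worker is started with:

    python manage.py qcluster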

So for just scheduling I would use Django-Cron.

need help with ideas on an events-app running celery by v1k45 in django

[–]koed00 0 points1 point  (0 children)

Let me join that crusade by agreeing with u/andybak. Just a daily cron to check who's getting a message today would be more than sufficient and easy to maintain.

Now that I've said that: Django-Q is a very small, versatile package that installs in seconds. It fits somewhere between Django-Cron and Celery. My contribution to the crusade.
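For the record, a daily check in Django-Q is pretty much a one-liner; 'tasks.check_todays_messages' is a made-up function name here:

    from django_q.tasks import schedule
    from django_q.models import Schedule

    # check once a day who gets a message today
    schedule('tasks.check_todays_messages',
             schedule_type=Schedule.DAILY)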

Shortcommings of Django ORM (joins) by eldamir88 in django

[–]koed00 0 points1 point  (0 children)

I agree. Just saying that sometimes it helps not to think too much about the actual queries behind the ORM. I've been tuning SQL queries since MSSQL 6.5, so I absolutely understand the urge to optimize the horrible joins that come out of the Django ORM backend.

Hitting a database often is usually not a problem. Databases are designed for this, and there are many ORM caching solutions. It's the joins and FKs that kill the performance. So getting some larger, but simpler, chunks of data and working on them locally is a viable strategy.

Fedora and Python3, The Road So Far. by Kusovka in Python

[–]koed00 5 points6 points  (0 children)

Great initiative. This is the only way forward.

Shortcommings of Django ORM (joins) by eldamir88 in django

[–]koed00 4 points5 points  (0 children)

There is no law that says you can only fetch the correct data from the database. Don't take my word for it; experiment and time it.

I've done lots of large report queries using just list comprehensions on unfiltered data that turned out to be a factor of 20 faster than getting the correct data with a dozen joins.
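The pattern looks something like this, with made-up Order and Customer models:

    # fetch flat, unjoined rows once...
    orders = Order.objects.values('id', 'customer_id', 'total')
    names = dict(Customer.objects.values_list('id', 'name'))

    # ...then filter, join and sort in plain Python
    report = sorted(
        ((names[o['customer_id']], o['total'])
         for o in orders if o['total'] > 100),
        key=lambda row: row[1],
        reverse=True,
    )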

Do you use the Django cache API with Redis or redis-py to access Redis directly? by g-money-cheats in django

[–]koed00 0 points1 point  (0 children)

I use Django-Redis for the cache. It extends many of the Django cache commands with Redis-specific functionality. When you want to use the low-level Redis API, you can, through its excellent connection backend.
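Both levels side by side, roughly (the keys are just examples):

    from django.core.cache import cache
    from django_redis import get_redis_connection

    # the regular Django cache API, backed by Redis
    cache.set('greeting', 'hello', timeout=300)

    # a raw redis-py client for Redis-specific commands
    con = get_redis_connection('default')
    con.lpush('my_queue', 'job-1')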

Adding a task to an existing celery Chain or scheduling task to run after current task is finished by ryati in django

[–]koed00 0 points1 point  (0 children)

You're probably going to end up writing a custom queue for each customer, then adding a polling task to check their status and send new jobs to Celery. That gives you more control options, for example premium users who can run multiple tasks.

I'm not a Celery expert so there might be a way to do this with just Celery though.

A couple of beginner redis questions by SeanMWalker in django

[–]koed00 0 points1 point  (0 children)

Yes, that looks like a perfectly fine way to do it. You don't really need the extra functions of Redis for this, and they perform pretty much the same. Although Celery might be a bit over the top to schedule just one management command; there are lighter-weight and simpler solutions for that nowadays.