I found that celery purge doesn't work for my more complex celery config. I use multiple named queues for different purposes:
$ sudo rabbitmqctl list_queues -p celery name messages consumers
Listing queues ... # Output sorted, whitespaced for readability
celery 0 2
[email protected] 0 1
[email protected] 0 1
apns 0 1
[email protected] 0 1
analytics 1 1
[email protected] 0 1
bcast.361093f1-de68-46c5-adff-d49ea8f164c0 0 1
bcast.a53632b0-c8b8-46d9-bd59-364afe9998c1 0 1
celeryev.c27b070d-b07e-4e37-9dca-dbb45d03fd54 0 1
celeryev.c66a9bed-84bd-40b0-8fe7-4e4d0c002866 0 1
celeryev.b490f71a-be1a-4cd8-ae17-06a713cc2a99 0 1
celeryev.9d023165-ab4a-42cb-86f8-90294b80bd1e 0 1
The first column is the queue name, the second is the number of messages waiting in the queue, and the third is the number of consumers for that queue.
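For context, the named queues come from routing in the Django settings. This is only a sketch of that kind of setup, not my actual config; the task names are placeholders:

# Illustration only: placeholder task names, not the real settings.
from kombu import Queue

CELERY_DEFAULT_QUEUE = "celery"
CELERY_QUEUES = (
    Queue("celery", routing_key="celery"),
    Queue("apns", routing_key="apns"),
    Queue("analytics", routing_key="analytics"),
)
CELERY_ROUTES = {
    "myapp.tasks.send_apns": {"queue": "apns"},           # hypothetical task
    "myapp.tasks.run_analytics": {"queue": "analytics"},  # hypothetical task
}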
The analytics task is a brute-force task that worked great on small data sets, but now takes more than 24 hours to process. Occasionally, something will go wrong and it will get stuck waiting on the database. It needs to be rewritten, but until then, when it gets stuck I kill the task, empty the queue, and try again. I detect "stuckness" by looking at the message count for the analytics queue, which should be 0 (analytics finished) or 1 (waiting for last night's analytics to finish). A count of 2 or higher is bad, and I get an email.
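The check itself is small. Here's a sketch of the kind of thing that does it (not my exact script; the broker credentials are placeholders, and it assumes kombu and Django's mail_admins):

# Sketch only: guest credentials and the "celery" vhost are assumptions.
from kombu import Connection
from django.core.mail import mail_admins

def check_analytics_backlog(broker_url="amqp://guest@localhost/celery"):
    with Connection(broker_url) as conn:
        # Passive declare doesn't touch the queue; it just returns
        # (queue, message_count, consumer_count).
        _, messages, _ = conn.default_channel.queue_declare(
            queue="analytics", passive=True)
    if messages >= 2:
        mail_admins("analytics queue looks stuck",
                    "analytics has %d waiting messages" % messages)
    return messages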
celery purge offers to erase tasks from one of the broadcast queues, and I don't see an option to pick a different named queue.
Here's my process:
$ sudo /etc/init.d/celeryd stop # Wait for the analytics task to be the last one running, then Ctrl-C
$ ps -ef | grep analytics # Get the PID of the worker, not the root PID reported by celery
$ sudo kill <PID>
$ sudo /etc/init.d/celeryd stop # Confirm dead
$ python manage.py celery amqp queue.purge analytics
$ sudo rabbitmqctl list_queues -p celery name messages consumers # Confirm messages is 0
$ sudo /etc/init.d/celeryd start
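If the amqp subcommand isn't handy, the same purge step can be done from a Python shell; a minimal sketch, with the same broker assumptions as above:

from kombu import Connection

# queue_purge removes messages from just the named queue and
# returns how many were deleted.
with Connection("amqp://guest@localhost/celery") as conn:
    purged = conn.default_channel.queue_purge("analytics")
    print("purged %d messages from analytics" % purged)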