The idea that App 2 makes a request back to App 1 (often called a callback or webhook) is a common architecture for getting web services to work together. Typically, the flow looks like this:
- App 1 makes a request to App 2
- the request contains a URL on App 1 to be called upon completion
- App 2 responds to the request, and begins background work
- eventually, App 2 requests the URL provided by App 1’s initial request
- this request can either contain necessary status data, or App 2 might have an API from which App 1 can retrieve necessary data
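The steps above can be sketched in-process with plain Python. Here a function call stands in for each HTTP request, and passing `app_one_callback` stands in for sending the callback URL; all names are hypothetical, chosen just to mirror the flow.

```python
import threading

def app_two_start_job(data, callback):
    """App 2: accept the request, do the work in the background,
    then call back App 1 with the result."""
    def work():
        # the background work (here: a trivial transformation)
        result = {"status": "done", "output": data.upper()}
        # in a real system this would be an HTTP POST to the callback URL
        callback(result)
    threading.Thread(target=work).start()
    # the immediate response, before the work is finished
    return {"accepted": True}

# --- App 1 side ---
received = []
done = threading.Event()

def app_one_callback(result):
    """App 1: the endpoint behind the URL sent in the initial request."""
    received.append(result)
    done.set()

ack = app_two_start_job("hello", app_one_callback)
print(ack)          # immediate response: {'accepted': True}
done.wait(timeout=5)
print(received[0])  # eventual callback: {'status': 'done', 'output': 'HELLO'}
```

Note that App 1 gets its answer in two pieces: an immediate acknowledgement, and the actual result only once App 2 calls back.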
Such a design is common in many authorization and payment flows on the web, where multiple parties must work together. However, there is a lot of back-and-forth here, which can be inefficient.
A completely different approach uses message queues.
- App 1 publishes a message on a topic
- this happens asynchronously, so there is no initial response by App 2
- each message has an ID
- App 2 is listening on that topic and will eventually process the message
- App 2 publishes a new message with results
- the results message will have a correlation-ID that matches the ID of the request
- App 1 is listening for results and will eventually receive a results message with the desired correlation ID
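This correlation-ID pattern can be sketched with the standard library's `queue.Queue` standing in for the two topics; a real deployment would use a message broker such as RabbitMQ or Redis instead, but the matching logic is the same.

```python
import queue
import threading
import uuid

requests_topic = queue.Queue()  # topic App 1 publishes requests to
results_topic = queue.Queue()   # topic App 2 publishes results to

def app_two_worker():
    """App 2: consume a request, process it, and publish a result
    message whose correlation_id matches the request's id."""
    msg = requests_topic.get()
    results_topic.put({
        "correlation_id": msg["id"],
        "payload": msg["payload"] * 2,  # the actual processing
    })

threading.Thread(target=app_two_worker).start()

# App 1: publish a request with a fresh message ID ...
msg_id = str(uuid.uuid4())
requests_topic.put({"id": msg_id, "payload": 21})

# ... then wait for a result whose correlation ID matches,
# skipping any results meant for other requests.
while True:
    result = results_topic.get(timeout=5)
    if result["correlation_id"] == msg_id:
        break

print(result["payload"])  # -> 42
```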
Such message-queue-based architectures are especially interesting in high-throughput scenarios, or in an enterprise context where you want to be able to easily add more listeners for events, instead of HTTP-style 1:1 communication.
Celery is a Python tool that builds on such message queues but hides most of the details. With Celery, the worker plays the role of App 2: you start a Celery worker process, and App 1 can then run a function call as a background task. This is similar to using async functions or starting a thread, except that the Celery worker doesn't have to run on the same server.
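A minimal sketch of that setup might look as follows. The module name `tasks` and the broker URL are placeholders: this assumes a Redis server at that address, and a worker started in a separate process with `celery -A tasks worker`, so the block is not runnable on its own.

```python
# tasks.py -- a minimal Celery sketch (assumes a Redis broker is running)
from celery import Celery

app = Celery(
    "tasks",
    broker="redis://localhost:6379/0",
    backend="redis://localhost:6379/0",  # needed so App 1 can fetch results
)

@app.task
def add(x, y):
    # This function body runs on the Celery worker,
    # which may live on a different server than the caller.
    return x + y

# On the App 1 side:
#   from tasks import add
#   result = add.delay(2, 3)       # returns immediately with an AsyncResult
#   print(result.get(timeout=10))  # blocks until the worker has finished
```

The caller only deals with `add.delay(...)`; the message IDs, correlation, and the results topic from the previous section are all handled by Celery internally.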
So if you just want asynchronous background tasks, Celery is a good fit. If you already have a separate App 2, Celery won't help; instead, you should implement one of the HTTP-callback or message-queue approaches described above.