Update docs on accessing callback arguments in errback (scrapy#4634)
dsandeep0138 authored Jun 18, 2020
1 parent 3d027fb commit 5d54173
Showing 1 changed file with 29 additions and 0 deletions.
29 changes: 29 additions & 0 deletions docs/topics/request-response.rst
@@ -189,6 +189,10 @@ Request objects
cloned using the ``copy()`` or ``replace()`` methods, and can also be
accessed, in your spider, from the ``response.cb_kwargs`` attribute.

In case of a failure to process the request, this dict can be accessed as
``failure.request.cb_kwargs`` in the request's errback. For more information,
see :ref:`topics-request-response-ref-accessing-callback-arguments-in-errback`.

.. method:: Request.copy()

Return a new Request which is a copy of this Request. See also:
@@ -312,6 +316,31 @@ errors if needed::
request = failure.request
self.logger.error('TimeoutError on %s', request.url)

.. _topics-request-response-ref-accessing-callback-arguments-in-errback:

Accessing additional data in errback functions
----------------------------------------------

If a request fails, you may want to access the arguments that were passed to
its callback so that the errback can process them further. The following
example shows how to do this using ``Failure.request.cb_kwargs``::

def parse(self, response):
request = scrapy.Request('http://www.example.com/index.html',
callback=self.parse_page2,
errback=self.errback_page2,
cb_kwargs=dict(main_url=response.url))
yield request

def parse_page2(self, response, main_url):
pass

def errback_page2(self, failure):
yield dict(
main_url=failure.request.cb_kwargs['main_url'],
)

.. _topics-request-meta:

Request.meta special keys
