Scrapy source using scrapy #332
Conversation
Force-pushed from 860d39f to 4191967
Yeah, now with the closing queue implemented, you can simplify the code even further. Please read the review. When it is done, you'll be able to get rid of DltSpider, and people will be able to use a standard spider here.
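The "closing queue" idea referred to here can be sketched as follows. `ClosableQueue`, `get_item`, and `listen` are assumed names for illustration, not the actual dlt implementation; only `QueueClosedError` appears by name in the commit history:

```python
import queue


class QueueClosedError(Exception):
    """Signals that the producer has closed the queue."""


class ClosableQueue(queue.Queue):
    """Minimal sketch of a queue the Scrapy side can close so the
    consuming loop knows when to exit (assumed names, not dlt's code)."""

    _SENTINEL = object()

    def close(self) -> None:
        # The sentinel unblocks a consumer waiting in get().
        self.put(self._SENTINEL)

    def get_item(self, timeout=None):
        item = self.get(timeout=timeout)
        if item is self._SENTINEL:
            raise QueueClosedError
        return item


def listen(q: ClosableQueue):
    """The listener handles QueueClosedError to exit its loop."""
    while True:
        try:
            yield q.get_item()
        except QueueClosedError:
            return
```

With something like this in place, a standard `scrapy.Spider` can stay oblivious to dlt: the item pipeline puts scraped items on the queue and calls `close()` when the engine stops.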
Force-pushed from e98fde3 to 68f8a24
Force-pushed from 03202dc to 0166b93
Force-pushed from 335d655 to 3b440d0
Looks good, but we still need to simplify it. I'll ping you for the code walkthrough.
* Close queue
* Add requirements.txt
* Format code
* Cleanup code
* Fix linting issues
* Remove redundant config option
* Add revised README
* Add more docstring and cleanup sources
* Make api simpler
* Cleanup

* Add logging and batch size configuration
* Cleanup code

* Add tests
* Adjust assert
* Update README
* Add missing diagram file
* Update README
* Close queue when exiting
* Check if queue close is called
* Log number of batches
* Fix linting issues
* Fix linting issues
* Mark scrapy source
* Fix linting issue
* Format code
* Yield!
All looks good now, but the tests somehow do not end. Most probably a thread that is not a daemon thread is still executing.
It all works each time I run the tests locally, also when I mix them with other tests :/
My take:
- try to stop the reactor after the test
- if that does not work, then we skip all reactor tests
This is a really good job! Just the way scrapy works is a little bit obnoxious.
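The diagnosis above (a leftover non-daemon thread keeping the test run alive) can be reproduced with plain threading, independent of Twisted. A minimal illustration:

```python
import threading
import time


def background_loop() -> None:
    # Stands in for a leftover reactor/worker thread that never returns.
    while True:
        time.sleep(0.05)


# Started without daemon=True, a thread like this would block interpreter
# shutdown indefinitely, which is exactly the "tests do not end" symptom.
# Marking it as a daemon lets the process exit once the main thread is done.
worker = threading.Thread(target=background_loop, daemon=True)
worker.start()
```

This is why the suggested fallbacks are to stop the reactor explicitly after the test, or to skip the reactor tests entirely.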
@rudolfix I decided to mark the test as skipped, since we have other, more pressing issues, and left a TODO item to do 10% research on this. Tried to reproduce this locally by
Created a bug ticket: #394
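For reference, marking a test as skipped with a pointer to the ticket looks like this; the test name here is hypothetical, only the ticket number #394 comes from the thread:

```python
import pytest


@pytest.mark.skip(
    reason="reactor thread does not shut down cleanly; see #394"
)
def test_scraping_reactor_shutdown() -> None:  # hypothetical test name
    ...
```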
LGTM!
* Close queue
* Add requirements.txt
* Remove redundant config option
* Add revised README
* Make api simpler
* Add batching of results
* Add logging and batch size configuration
* Add pytest-mock and scrapy
* Close queue when exiting
* Check if queue close is called
* Log number of batches
* Fix linting issues
* Fix linting issues
* Mark scrapy source
* Fix linting issue
* Format code
* Yield!
* Adjust tests
* Add pytest-twisted
* Add twisted to scrapy dependencies
* Add twisted to dev dependencies
* Add review comments
* Add more checks and do not exit when queue is empty
* Create QueueClosedError and handle in listener to exit loop
* Simplify code
* Stop crawling if queue is closed
* Fix linting issues
* Fix linting issues
* Adjust tests and disable telnet server for scrapy
* Remove pytest-twisted
* Refactor scrapy item pipeline
* Eliminate custom spider
* Use pytest.mark.forked to run tests for ALL_DESTINATIONS
* Add pytest-forked
* Update lockfile
* Use scrapy signals
* Hide batching and retrieving logic inside queue
* Add more types
* Extend default scrapy settings
* Extract pipeline and scrapy runners
* Simplify helpers code
* Cleanup code
* Add start_urls_file configuration option
* Sync scrapy log level with dlt log level
* Expose simple scraping pipeline runner
* Adjust config file
* Connect signals in ScrapyRunner.init
* Register source and do cleanups
* Better scrapy setting passing and minor cleanups
* Remove redundant code comments
* Call engine_stopped callback in finally block
* Add more docstrings related to runners
* Adjust batch size
* Fix queue batching bugs
* Pass crawler instance to item_scraped callback
* Add advanced example to pipeline code
* Access settings override for scrapy
* Rewrite tests
* Small readme update for bing webmaster
* Adjust queue read timeout
* Extract test utils for scraping source
* Add stream generator to queue to handle generator exit exception
* Extract signal registering and tearing down as context manager
* Adjust and cleanup example pipeline source file
* Cleanup scraping helpers
* Adjust tests for scraping pipeline
* Add callback access to scraping resource
* Update readme
* Cleanup code
* Import ParamSpec from typing extensions
* Fix linting issues
* Fix linting issues
* Set encoding when opening the file with urls
* Adjust typing for scraping testing utils
* Use proper Union syntax
* Adjust mock patch module path for scraping tests
* Use latest dlt version
* Adjust mock patch module path for scraping tests
* Adjust tests and mark ones to skip
* Cleanup tests and utils for scraping source
* Re-use spy on queue.close calls
* Use append write_disposition by default for scraping source
* Update test skip reason
* Stop crawler manually
* Return self from __call__
* Check if crawler.stop is actually called
* Check if crawling has already been stopped
* Test to verify resource name generation and override
* Adjust resource name selection
* Add more docstrings and update readme
* Update readme
* Add scrapy configuration in example pipeline
* Shutdown twisted reactor after module tests
* Use simple run_pipeline
* Close the queue after timeout
* Rewrite a comment and use break instead of return in while loop
* Update comments
* Mock queue with alternative implementation
* Adjust mock patch path
* Add logging when scrapy stops and re-arrange code actions
* Stop crawler in on_engine_stopped
* Call on_engine_stopped from on_item_scraped if the queue is closed
* Skip test
* handles end_value and row_order in sql_database
* drops dlt extract dependencies
* test(filesystem): enable testing (#348)
  * enable testing
  * bump dlt version
  * pushes correct gdrive path
  * fixes formatting, removes az logs
  (Co-authored-by: Marcin Rudolf <[email protected]>)
* Update pull_request_template.md (#373)
* fixes kinesis shard test (#390)
* Fix: skip tests for sources without test accounts (#379)
  * skip facebook tests
  * skip matomo tests
  * skip personio tests
  * replace postgres with duckdb
  * return kafka, skip strapi
* Scrapy source using scrapy (#332) (squashed commit list identical to the squash message earlier in this conversation)
* rename template buttons (#340)
  * Rename new-verified-source.md to build-new-verified-source.md
  * Rename source-request.md to request-new-source.md
  * Update request-new-source.md
  * fixes references
  (Co-authored-by: Marcin Rudolf <[email protected]>)
* bumps dlt to 0.4.6
* fixes linter
* fixes the empty primary key disabling incremental dedup
* bumps to 0.4.7a0, does not overwrite [] as primary key
* bumps dlt to 0.4.7

Co-authored-by: Ilya Gurov <[email protected]>
Co-authored-by: Anton Burnashev <[email protected]>
Co-authored-by: Alena Astrakhantseva <[email protected]>
Co-authored-by: Sultan Iman <[email protected]>
Co-authored-by: adrianbr <[email protected]>
Hey,
This PR implements a Scrapy source which sets up communication between threads via a queue, see the image below.
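The thread communication can be sketched roughly like this; all names are assumed for illustration, with a producer thread standing in for the Scrapy crawler and the generator standing in for the dlt resource:

```python
import queue
import threading


def scraped_items():
    """Yield items produced by a crawler running on another thread."""
    q: "queue.Queue" = queue.Queue()
    done = object()  # sentinel that "closes" the queue

    def crawl() -> None:
        # Stands in for the Scrapy engine scraping pages on its own thread.
        for n in range(3):
            q.put({"url": f"https://example.com/{n}"})
        q.put(done)  # signal that the engine has stopped

    threading.Thread(target=crawl, daemon=True).start()
    while (item := q.get()) is not done:
        yield item


items = list(scraped_items())
```

The queue decouples the two sides: Scrapy's Twisted reactor runs where it wants, while dlt consumes a plain generator on the main thread.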
TODO
- `add_limit(...)`
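`add_limit(...)` is dlt's way to cap how many items a resource yields, which is handy while developing a scraper. Its effect is roughly that of `itertools.islice`, as in this stdlib-only illustration (the function here is a stand-in, not dlt's implementation):

```python
from itertools import islice
from typing import Iterable, Iterator


def add_limit(resource: Iterable, max_items: int) -> Iterator:
    """Rough illustration of limiting a resource: stop after max_items,
    regardless of how much more the source could yield."""
    return islice(resource, max_items)


# Cap a (potentially huge) stream of scraped pages at 3 items.
limited = list(add_limit(({"page": n} for n in range(1_000)), 3))
```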
With the feedback from @rudolfix, we decided to make the following changes: