
Web Scraping and Crawling with Scrapy and MongoDB

Getting Started

There are two possible ways to continue from where we left off.
The first is to extend our existing Spider: in the parse_item method, extract the next page link from the response with an XPath expression and yield a Request object with a callback to the same parse_item method. This way Scrapy will automatically make a new request to the link we specify. You can find more information on this approach in the Scrapy documentation.
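For reference, that first approach would look roughly like this (a minimal sketch only - the class and spider names are based on the last tutorial and the XPath for the pager's "next" link is an assumption, so check it against the actual markup):

import urlparse

import scrapy

from stack.items import StackItem


class StackSpider(scrapy.Spider):
    name = 'stack'
    allowed_domains = ['stackoverflow.com']
    start_urls = ['http://stackoverflow.com/questions?pagesize=50&sort=newest']

    def parse(self, response):
        # Responses for start_urls arrive here by default; hand them to parse_item.
        return self.parse_item(response)

    def parse_item(self, response):
        questions = response.xpath('//div[@class="summary"]/h3')

        for question in questions:
            item = StackItem()
            item['url'] = question.xpath(
                'a[@class="question-hyperlink"]/@href').extract()[0]
            item['title'] = question.xpath(
                'a[@class="question-hyperlink"]/text()').extract()[0]
            yield item

        # Follow the pager's "next" link (hypothetical XPath - verify against the
        # real markup) and run the new response through this same method.
        next_page = response.xpath('//a[@rel="next"]/@href').extract()
        if next_page:
            yield scrapy.Request(
                urlparse.urljoin(response.url, next_page[0]),
                callback=self.parse_item
            )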
The other, much simpler option is to utilize a different type of spider - the CrawlSpider. It’s an extended version of the basic Spider, designed exactly for our use case.

The CrawlSpider

We’ll be using the same Scrapy project from the last tutorial, so grab the code from the repo if you need it.

Create the Boilerplate

Within the “stack” directory, start by generating the spider boilerplate from the crawl template:
$ scrapy genspider stack_crawler stackoverflow.com -t crawl
Created spider 'stack_crawler' using template 'crawl' in module:
  stack.spiders.stack_crawler
The Scrapy project should now look like this:
├── scrapy.cfg
└── stack
    ├── __init__.py
    ├── items.py
    ├── pipelines.py
    ├── settings.py
    └── spiders
        ├── __init__.py
        ├── stack_crawler.py
        └── stack_spider.py
And the stack_crawler.py file should look like this:
# -*- coding: utf-8 -*-
import scrapy
from scrapy.contrib.linkextractors import LinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule

from stack.items import StackItem


class StackCrawlerSpider(CrawlSpider):
    name = 'stack_crawler'
    allowed_domains = ['stackoverflow.com']
    start_urls = ['http://www.stackoverflow.com/']

    rules = (
        Rule(LinkExtractor(allow=r'Items/'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        i = StackItem()
        #i['domain_id'] = response.xpath('//input[@id="sid"]/@value').extract()
        #i['name'] = response.xpath('//div[@id="name"]').extract()
        #i['description'] = response.xpath('//div[@id="description"]').extract()
        return i
We just need to make a few updates to this boilerplate…

Update the start_urls list

First, add the first page of questions to the start_urls list:
start_urls = [
    'http://stackoverflow.com/questions?pagesize=50&sort=newest'
]

Update the rules list

Next, we need to tell the spider where it can find the next page links by adding a regular expression to the rules attribute:
rules = [
    Rule(LinkExtractor(allow=r'questions\?page=[0-9]&sort=newest'),
         callback='parse_item', follow=True)
]
Scrapy will now automatically request new pages based on those links and pass the response to the parse_item method to extract the questions and titles.
If you’re paying close attention, you’ll notice that this regex limits the crawl to the first 9 pages, since for this demo we do not want to scrape all 176,234 pages!
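You can sanity check the pattern on its own (a standalone snippet, not part of the project):

import re

pattern = r'questions\?page=[0-9]&sort=newest'

# Single-digit page numbers match, double-digit ones do not.
print(bool(re.search(pattern, 'http://stackoverflow.com/questions?page=9&sort=newest')))   # True
print(bool(re.search(pattern, 'http://stackoverflow.com/questions?page=10&sort=newest')))  # False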

Update the parse_item method

Now we just need to write how to parse the pages with xpath, which we already did in the last tutorial - so just copy it over:
def parse_item(self, response):
    questions = response.xpath('//div[@class="summary"]/h3')

    for question in questions:
        item = StackItem()
        item['url'] = question.xpath(
            'a[@class="question-hyperlink"]/@href').extract()[0]
        item['title'] = question.xpath(
            'a[@class="question-hyperlink"]/text()').extract()[0]
        yield item
That’s it for the spider, but do not start it just yet.

Add a Download Delay

We need to be nice to StackOverflow (and any site, for that matter) by setting a download delay in settings.py:
DOWNLOAD_DELAY = 5
This tells Scrapy to wait at least 5 seconds between every new request it makes. You’re essentially rate limiting yourself. If you do not do this, StackOverflow will rate limit you; and if you continue to scrape the site without imposing a rate limit, your IP address could be banned. So, be nice - treat any site you scrape as if it were your own.
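If you want to be even gentler, Scrapy ships with a couple of related settings you can layer on top of the delay (a sketch only - we do not use these in this tutorial):

# settings.py
DOWNLOAD_DELAY = 5

# On by default: Scrapy waits between 0.5x and 1.5x of DOWNLOAD_DELAY so
# requests do not fire at perfectly regular intervals.
RANDOMIZE_DOWNLOAD_DELAY = True

# Optionally let the AutoThrottle extension adjust the delay based on load.
# AUTOTHROTTLE_ENABLED = True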
Now there is only one thing left to do - store the data.

MongoDB

Last time we only downloaded 50 questions, but since we are grabbing a lot more data this time, we want to avoid adding duplicate questions to the database. We can do that by using a MongoDB upsert, which means we update the question if it is already in the database and insert it otherwise.
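In isolation, the upsert works like this (a standalone sketch assuming a local MongoDB instance and the database/collection names from the last tutorial; the example URL is made up):

import pymongo

connection = pymongo.MongoClient('localhost', 27017)
collection = connection['stackoverflow']['questions']

doc = {'url': 'http://example.com/questions/1', 'title': 'First title'}

# The first call inserts the document, the second updates it in place -
# either way, there is only ever one document per URL.
collection.update({'url': doc['url']}, doc, upsert=True)

doc['title'] = 'Edited title'
collection.update({'url': doc['url']}, doc, upsert=True)

print(collection.find({'url': doc['url']}).count())  # 1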
Modify the MongoDBPipeline we defined earlier:
import pymongo

from scrapy import log
from scrapy.conf import settings
from scrapy.exceptions import DropItem


class MongoDBPipeline(object):

    def __init__(self):
        connection = pymongo.MongoClient(
            settings['MONGODB_SERVER'],
            settings['MONGODB_PORT']
        )
        db = connection[settings['MONGODB_DB']]
        self.collection = db[settings['MONGODB_COLLECTION']]

    def process_item(self, item, spider):
        # Drop the item if any of its fields is empty.
        for data in item:
            if not item[data]:
                raise DropItem("Missing data!")
        # Upsert keyed on the question URL: update the existing document if the
        # question is already stored, insert it otherwise.
        self.collection.update({'url': item['url']}, dict(item), upsert=True)
        log.msg("Question added to MongoDB database!",
                level=log.DEBUG, spider=spider)
        return item
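If you are building this up from scratch rather than reusing the last tutorial's code, remember that the pipeline also has to be enabled and the MongoDB settings defined in settings.py. The values below are assumptions based on the last tutorial and the shell session further down (database stackoverflow, collection questions) - adjust them to your setup:

ITEM_PIPELINES = {'stack.pipelines.MongoDBPipeline': 300}

MONGODB_SERVER = "localhost"
MONGODB_PORT = 27017
MONGODB_DB = "stackoverflow"
MONGODB_COLLECTION = "questions"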
For simplicity, we did not optimize the query and did not deal with indexes since this is not a production environment.
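If you did want to go one step further, a unique index on url would speed up the upsert lookup and enforce deduplication at the database level (a one-off sketch, assuming MongoDB on localhost and the names above):

import pymongo

connection = pymongo.MongoClient('localhost', 27017)
db = connection['stackoverflow']

# Unique index on the field the upsert keys on.
db['questions'].create_index('url', unique=True)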

Test

Start the spider!
$ scrapy crawl stack_crawler
Now sit back and watch your database fill with data!
$ mongo
MongoDB shell version: 3.0.4
> use stackoverflow
switched to db stackoverflow
> db.questions.count()
447
>
