Getting Scrapy to follow links and collect data

I am trying to write a Scrapy program that opens links and collects data from this tag: <p class="attrgroup"></p>

I managed to get Scrapy to collect all of the links from the given URL, but it does not follow them. Any help is much appreciated.

Answer:

You need to yield Request instances for the links, assign a callback, and extract the text of the desired p element inside that callback:

# -*- coding: utf-8 -*-
import scrapy


# item class included here
class DmozItem(scrapy.Item):
    # define the fields for your item here like:
    link = scrapy.Field()
    attr = scrapy.Field()


class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["craigslist.org"]
    start_urls = [
        "http://chicago.craigslist.org/search/emd?"
    ]

    BASE_URL = 'http://chicago.craigslist.org/'

    def parse(self, response):
        links = response.xpath('//a[@class="hdrlnk"]/@href').extract()
        for link in links:
            absolute_url = self.BASE_URL + link
            yield scrapy.Request(absolute_url, callback=self.parse_attr)

    def parse_attr(self, response):
        item = DmozItem()
        item["link"] = response.url
        item["attr"] = "".join(response.xpath("//p[@class='attrgroup']//text()").extract())
        return item
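
A slightly more robust alternative to the hard-coded BASE_URL is to let Scrapy resolve each relative link against the page URL with response.urljoin(). This is only a minimal sketch of the parse callback, assuming the rest of the spider stays exactly as above:

    def parse(self, response):
        # response.urljoin() resolves each relative href against the page URL,
        # so no hard-coded BASE_URL constant is needed
        for link in response.xpath('//a[@class="hdrlnk"]/@href').extract():
            yield scrapy.Request(response.urljoin(link), callback=self.parse_attr)

With either version you can run the spider and export the scraped items, for example with scrapy crawl dmoz -o items.json from inside the project directory.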
