How can I get the failed URLs in Scrapy?

I'm new to Scrapy, which I know is an amazing crawler framework!

In my project I sent more than 90,000 requests, but some of them failed. I set the log level to INFO, and I can only see some statistics, but no details.

2012-12-05 21:03:04+0800 [pd_spider] INFO: Dumping spider stats:
{'downloader/exception_count': 1,
 'downloader/exception_type_count/twisted.internet.error.ConnectionDone': 1,
 'downloader/request_bytes': 46282582,
 'downloader/request_count': 92383,
 'downloader/request_method_count/GET': 92383,
 'downloader/response_bytes': 123766459,
 'downloader/response_count': 92382,
 'downloader/response_status_count/200': 92382,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2012, 12, 5, 13, 3, 4, 836000),
 'item_scraped_count': 46191,
 'request_depth_max': 1,
 'scheduler/memory_enqueued': 92383,
 'start_time': datetime.datetime(2012, 12, 5, 12, 23, 25, 427000)}

Is there a way to get a more detailed report, for example one that shows those failed URLs? Thanks!

Answer:

Yes, this is possible.

  • The code below adds a failed_urls list to a basic Spider class and appends a URL to it when the URL's response status is 404 (this would need to be extended to cover other error statuses as required).
  • Next, I added a handler that joins the list into a single string and adds it to the spider's stats when the spider is closed.
  • Based on your comments, it is possible to track Twisted errors, and some of the answers below give examples of how to handle that particular use case.
  • The code has been updated to work with Scrapy 1.8. All thanks for this go to Juliano Mendieta, since all I did was apply his suggested edits and confirm that the spider works as intended.
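As the first bullet notes, the 404-only check can be widened to other error statuses. A minimal sketch (with a hypothetical set of status values and a hypothetical helper name) of a check the spider below could use:

```python
# Hypothetical set of HTTP statuses to record as failures; adjust as needed.
ERROR_STATUSES = {404, 500, 502, 503, 504}

def is_failed_response(status):
    """Return True when a response status should be recorded as a failure."""
    return status in ERROR_STATUSES
```

With handle_httpstatus_list = sorted(ERROR_STATUSES) on the spider, parse() can then call is_failed_response(response.status) instead of comparing against 404 alone.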

from scrapy import Spider, signals


class MySpider(Spider):
    handle_httpstatus_list = [404]
    name = "myspider"
    allowed_domains = ["example.com"]
    start_urls = [
        'http://www.example.com/thisurlexists.html',
        'http://www.example.com/thisurldoesnotexist.html',
        'http://www.example.com/neitherdoesthisone.html'
    ]

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.failed_urls = []

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super(MySpider, cls).from_crawler(crawler, *args, **kwargs)
        crawler.signals.connect(spider.handle_spider_closed, signals.spider_closed)
        return spider

    def parse(self, response):
        if response.status == 404:
            self.crawler.stats.inc_value('failed_url_count')
            self.failed_urls.append(response.url)

    def handle_spider_closed(self, reason):
        # Join the collected URLs into one string and store it in the stats.
        self.crawler.stats.set_value('failed_urls', ', '.join(self.failed_urls))

    def process_exception(self, response, exception, spider):
        # Note: process_exception is a downloader-middleware hook; Scrapy will
        # not call it on a spider unless it is wired up through a middleware.
        ex_class = "%s.%s" % (exception.__class__.__module__, exception.__class__.__name__)
        self.crawler.stats.inc_value('downloader/exception_count', spider=spider)
        self.crawler.stats.inc_value('downloader/exception_type_count/%s' % ex_class, spider=spider)

Example output (note that the downloader/exception_count* stats only appear when exceptions are actually thrown — I simulated them by trying to run the spider after turning off my wireless adapter):

2012-12-10 11:15:26+0000 [myspider] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 15,
 'downloader/exception_type_count/twisted.internet.error.DNSLookupError': 15,
 'downloader/request_bytes': 717,
 'downloader/request_count': 3,
 'downloader/request_method_count/GET': 3,
 'downloader/response_bytes': 15209,
 'downloader/response_count': 3,
 'downloader/response_status_count/200': 1,
 'downloader/response_status_count/404': 2,
 'failed_url_count': 2,
 'failed_urls': 'http://www.example.com/thisurldoesnotexist.html, http://www.example.com/neitherdoesthisone.html',
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2012, 12, 10, 11, 15, 26, 874000),
 'log_count/DEBUG': 9,
 'log_count/ERROR': 2,
 'log_count/INFO': 4,
 'response_received_count': 3,
 'scheduler/dequeued': 3,
 'scheduler/dequeued/memory': 3,
 'scheduler/enqueued': 3,
 'scheduler/enqueued/memory': 3,
 'spider_exceptions/NameError': 2,
 'start_time': datetime.datetime(2012, 12, 10, 11, 15, 26, 560000)}

The above is the full content of "How can I get the failed URLs in Scrapy?". Source link: utcz.com/qa/412105.html
