Why does Zhihu still block my requests-based scraper even though I send cookies?

import requests
from fake_useragent import UserAgent

from config.index import ZHIHU_COOKIES          # raw cookie string captured after logging in
from tools.cookie_operate import str_to_dict    # local helper: cookie string -> dict

headers = {
    "User-Agent": UserAgent().random,
    "Host": "www.zhihu.com",
    "Referer": "https://www.zhihu.com/",
}

cookie_dict = str_to_dict(ZHIHU_COOKIES)
print(cookie_dict)

# allow_redirects=False so a 302 back to the login page shows up in the status code
response = requests.get("https://www.zhihu.com", cookies=cookie_dict,
                        headers=headers, allow_redirects=False)
print(response)
print(response.text)
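str_to_dict is a local helper whose source is not shown in the post. A minimal sketch of what it presumably does, assuming the cookies were copied out of the browser as a raw Cookie header string ("k1=v1; k2=v2; ..."):

```python
def str_to_dict(cookie_str):
    """Split a raw Cookie header string like "k1=v1; k2=v2" into a dict."""
    cookies = {}
    for pair in cookie_str.split(";"):
        if "=" in pair:
            # partition() keeps any "=" inside the value intact
            key, _, value = pair.strip().partition("=")
            cookies[key] = value
    return cookies

print(str_to_dict("z_c0=abc123; d_c0=xyz; _xsrf=token"))
```

If the real helper mangles keys or drops values, the cookie dict passed to requests would be silently wrong, which alone is enough to get redirected to the login page.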

After logging in I captured the cookies and put them in a config file, which I thought was foolproof, yet I still get redirected to the login page. What could be the reason?


Answer:

A plain scraper will trip anti-scraping limits once it makes enough requests; cookies alone are likely not the only signal being checked (request frequency and browser fingerprints also matter). If you only need to scrape a small amount of data, consider driving a real browser with Selenium instead.
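If you do switch to a browser, the suggestion above can be sketched as follows. This is a minimal illustration, not code from the post: the function name, the `.zhihu.com` cookie domain, and the ChromeDriver setup are all assumptions.

```python
def fetch_with_browser(url="https://www.zhihu.com", cookie_dict=None):
    """Load a page in a real Chrome instance, optionally injecting saved cookies.

    The selenium import is deferred so this module can be loaded even where
    Selenium is not installed; running it requires the selenium package and a
    matching ChromeDriver on PATH.
    """
    from selenium import webdriver

    driver = webdriver.Chrome()
    try:
        # The browser must visit the domain once before add_cookie()
        # will accept cookies for it.
        driver.get(url)
        for name, value in (cookie_dict or {}).items():
            driver.add_cookie({"name": name, "value": value,
                               "domain": ".zhihu.com"})
        driver.get(url)  # reload, now carrying the injected cookies
        return driver.page_source
    finally:
        driver.quit()
```

Usage would be something like `fetch_with_browser(cookie_dict=str_to_dict(ZHIHU_COOKIES))`; because a real browser executes JavaScript and presents a normal fingerprint, it sidesteps many of the checks that block bare requests sessions.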

The above is the full content of "Why does Zhihu still block my requests-based scraper even though I send cookies?". Source link: utcz.com/a/163490.html
