requests basics
The requests library is essential knowledge for web scraping.
1. Passing headers
import requests

query = input("Enter the name of a star you like: ")
url = f'https://www.sogou.com/web?query={query}'
dic = {
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.182 Safari/537.36"
}
resp = requests.get(url, headers=dic)  # work around a simple anti-scraping check
print(resp)
print(resp.text)  # the page source
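You can see exactly what the headers dict does without touching the network: `requests.Request(...).prepare()` builds the request but does not send it, so the outgoing headers can be inspected. A minimal sketch (the query value `test` is just a placeholder):

```python
import requests

url = "https://www.sogou.com/web?query=test"
dic = {
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.182 Safari/537.36"
}
# prepare() builds the request object without sending it,
# so we can check the headers that would go on the wire
req = requests.Request("GET", url, headers=dic).prepare()
print(req.headers["User-Agent"])
```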
2. Passing parameters in a POST request with data=
import requests

url = "https://fanyi.baidu.com/sug"
s = input("Enter the English word you want to translate: ")
dat = {
    "kw": s
}
# For a POST request, the payload must go in a dict passed via the data parameter
resp = requests.post(url, data=dat)
print(resp.json())  # parse the server's response as JSON => dict
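What data= actually does is form-encode the dict into the request body. A quick offline sketch using prepare() (the word "dog" is just a sample value):

```python
import requests

# data= is form-encoded into the POST body; prepare() lets us
# inspect the body without contacting the server
req = requests.Request("POST", "https://fanyi.baidu.com/sug", data={"kw": "dog"}).prepare()
print(req.body)                     # kw=dog
print(req.headers["Content-Type"])  # application/x-www-form-urlencoded
```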
3. Passing parameters in a GET request with params=
import requests

url = "https://movie.douban.com/j/chart/top_list"
# repackage the query parameters
param = {
    "type": "24",
    "interval_id": "100:90",
    "action": "",
    "start": 0,
    "limit": 20,
}
headers = {
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.182 Safari/537.36"
}
resp = requests.get(url=url, params=param, headers=headers)
print(resp.json())
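Under the hood, params= simply URL-encodes the dict into the query string. A sketch that shows the final URL without sending anything:

```python
import requests

url = "https://movie.douban.com/j/chart/top_list"
param = {
    "type": "24",
    "interval_id": "100:90",
    "action": "",
    "start": 0,
    "limit": 20,
}
# params= is URL-encoded into the query string; note that the
# colon in "100:90" becomes %3A in the final URL
req = requests.Request("GET", url, params=param).prepare()
print(req.url)
```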
4. Close the response when you are done with it; otherwise connections stay open and the program can easily hang
import requests

url = "https://movie.douban.com/j/chart/top_list"
# repackage the query parameters
param = {
    "type": "24",
    "interval_id": "100:90",
    "action": "",
    "start": 1,
    "limit": 20,
}
headers = {
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.182 Safari/537.36"
}
resp = requests.get(url=url, params=param, headers=headers)
print(resp.json())
resp.close()  # close the response
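Instead of calling close() by hand, a Response can also be used as a context manager, which closes it automatically when the with-block exits. A self-contained sketch (a tiny local stand-in server replaces douban here purely so the example runs without network access; the JSON it returns is made up):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

import requests

# Local stand-in server: always answers with a small JSON list
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps([{"title": "demo"}]).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/j/chart/top_list"
# The with-block calls resp.close() automatically on exit,
# even if an exception is raised inside it
with requests.get(url, params={"start": 1, "limit": 20}) as resp:
    data = resp.json()
print(data)
server.shutdown()
```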