
Scraping Tencent job postings with the Python crawler framework Scrapy — a five-figure monthly salary is no longer a dream

創(chuàng)建項(xiàng)目

scrapy startproject tencent
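For orientation, this command generates the standard Scrapy project skeleton, roughly like the layout below (the exact file list may vary slightly between Scrapy versions):

```
tencent/
    scrapy.cfg            # deployment configuration
    tencent/
        __init__.py
        items.py          # item definitions (edited next)
        pipelines.py      # item pipelines
        settings.py       # project settings
        spiders/
            __init__.py   # spiders are created here
```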

Write items.py

Define the TencentItem class:

import scrapy

class TencentItem(scrapy.Item):
    # define the fields for your item here like:
    # position name
    positionname = scrapy.Field()
    # detail page link
    positionlink = scrapy.Field()
    # position category
    positionType = scrapy.Field()
    # number of openings
    peopleNum = scrapy.Field()
    # work location
    workLocation = scrapy.Field()
    # publish date
    publishTime = scrapy.Field()

創(chuàng)建基礎(chǔ)類的爬蟲

scrapy genspider tencentPosition"tencent.com"

tencentPosition.py

# -*- coding: utf-8 -*-
import scrapy
from tencent.items import TencentItem

class TencentpositionSpider(scrapy.Spider):
    name = "tencent"
    allowed_domains = ["tencent.com"]
    url = "http://hr.tencent.com/position.php?&start="
    offset = 0
    start_urls = [url + str(offset)]

    def parse(self, response):
        for each in response.xpath("//tr[@class='even'] | //tr[@class='odd']"):
            # initialise the item object
            item = TencentItem()
            # position name
            item['positionname'] = each.xpath("./td[1]/a/text()").extract()[0]
            # detail page link
            item['positionlink'] = each.xpath("./td[1]/a/@href").extract()[0]
            # position category
            item['positionType'] = each.xpath("./td[2]/text()").extract()[0]
            # number of openings
            item['peopleNum'] = each.xpath("./td[3]/text()").extract()[0]
            # work location
            item['workLocation'] = each.xpath("./td[4]/text()").extract()[0]
            # publish date
            item['publishTime'] = each.xpath("./td[5]/text()").extract()[0]
            yield item
        if self.offset < 1680:
            self.offset += 10
            # After processing one page, send a request for the next page:
            # self.offset grows by 10, is joined onto the base url, and the
            # callback self.parse handles the new response.
            yield scrapy.Request(self.url + str(self.offset), callback=self.parse)
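The per-row extraction logic can be tried outside Scrapy. The sketch below mimics the same positional XPath paths with the standard library's xml.etree; the sample row and its values are made up for illustration, and `.text` stands in for Scrapy's `text()` step, which ElementTree does not support:

```python
import xml.etree.ElementTree as ET

# A sample row shaped like the hr.tencent.com listing table (hypothetical data).
row_html = """
<tr class="even">
  <td><a href="position_detail.php?id=1">Backend Engineer</a></td>
  <td>Technology</td>
  <td>2</td>
  <td>Shenzhen</td>
  <td>2018-06-01</td>
</tr>
"""

row = ET.fromstring(row_html)

# Same field-by-field mapping as the spider's parse() method.
item = {
    "positionname": row.find("./td[1]/a").text,
    "positionlink": row.find("./td[1]/a").get("href"),
    "positionType": row.find("./td[2]").text,
    "peopleNum": row.find("./td[3]").text,
    "workLocation": row.find("./td[4]").text,
    "publishTime": row.find("./td[5]").text,
}
print(item["positionname"])  # Backend Engineer
```

Scrapy selectors additionally return lists (hence the `.extract()[0]` above), so in the real spider a missing cell raises an IndexError — something to keep in mind if the page layout changes.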

The pipeline file

pipelines.py

import json

class TencentPipeline(object):
    def __init__(self):
        # open the output file in text mode with an explicit encoding
        self.filename = open("tencent.json", "w", encoding="utf-8")

    def process_item(self, item, spider):
        text = json.dumps(dict(item), ensure_ascii=False) + ",\n"
        self.filename.write(text)
        return item

    def close_spider(self, spider):
        self.filename.close()
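To see what process_item writes, the sketch below reproduces the same serialisation round-trip with plain dicts standing in for TencentItem instances (the file path and sample data are made up):

```python
import json
import os
import tempfile

# Hypothetical scraped items standing in for TencentItem instances.
items = [
    {"positionname": "Backend Engineer", "workLocation": "深圳"},
    {"positionname": "Data Analyst", "workLocation": "北京"},
]

path = os.path.join(tempfile.mkdtemp(), "tencent.json")

# Serialise each item the way process_item does: one JSON object per line,
# ensure_ascii=False so Chinese text is written as-is, not as \uXXXX escapes.
with open(path, "w", encoding="utf-8") as f:
    for item in items:
        f.write(json.dumps(dict(item), ensure_ascii=False) + ",\n")

# Read the file back; strip the trailing comma before decoding each line.
with open(path, encoding="utf-8") as f:
    records = [json.loads(line.rstrip(",\n")) for line in f if line.strip()]

print(records[0]["workLocation"])  # 深圳
```

Note the trailing comma the pipeline appends: the output is not a valid JSON array, so any consumer has to strip it per line as above (or the pipeline could simply write `"\n"` instead).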

Enable the pipeline in the settings file

ITEM_PIPELINES = {
    'tencent.pipelines.TencentPipeline': 300,
}

添加請(qǐng)求報(bào)頭

DEFAULT_REQUEST_HEADERS

settings.py

BOT_NAME = 'tencent'
SPIDER_MODULES = ['tencent.spiders']
NEWSPIDER_MODULE = 'tencent.spiders'
ROBOTSTXT_OBEY = True
DOWNLOAD_DELAY = 2
DEFAULT_REQUEST_HEADERS = {
    'User-Agent': 'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
}
ITEM_PIPELINES = {
    'tencent.pipelines.TencentPipeline': 300,
}
