
Python Scrapy Tutorial in Detail: Storing NBA Player Data in a MySQL Database


Getting the URL to crawl
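The data comes from an NBA China JSON endpoint rather than an HTML page: https://china.nba.com/static/data/league/playerlist.json. Before writing the spider it is worth confirming the structure of the payload; a minimal one-off check with the requests library (a throwaway script, not part of the Scrapy project) might look like this:

import requests

# Fetch the player list and peek at the JSON structure the spider will parse
resp = requests.get('https://china.nba.com/static/data/league/playerlist.json')
players = resp.json()['payload']['players']
print(len(players))                        # number of player records
print(players[0]['playerProfile'].keys())  # per-player fields, e.g. firstNameEn, height

Each record carries a playerProfile and a teamProfile object, which is exactly what the spider below unpacks.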



Preliminary setup

Open the project in PyCharm and start writing the spider files
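The project skeleton is assumed to have been generated with Scrapy's command-line tool before opening it in PyCharm; something like the following creates the nbaProject package and a spider stub matching the names used below:

scrapy startproject nbaProject
cd nbaProject
scrapy genspider nbaSpider nba.com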

The item field file, items.py

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class NbaprojectItem(scrapy.Item):
  # define the fields for your item here like:
  # name = scrapy.Field()
  # Every field is declared the same fixed way --> scrapy.Field()
  # English name
  engName = scrapy.Field()
  # Chinese name
  chName = scrapy.Field()
  # Height
  height = scrapy.Field()
  # Weight
  weight = scrapy.Field()
  # Country name in English
  contryEn = scrapy.Field()
  # Country name in Chinese
  contryCh = scrapy.Field()
  # Years in the NBA
  experience = scrapy.Field()
  # Jersey number
  jerseyNo = scrapy.Field()
  # Draft year
  draftYear = scrapy.Field()
  # Team name in English
  engTeam = scrapy.Field()
  # Team name in Chinese
  chTeam = scrapy.Field()
  # Position
  position = scrapy.Field()
  # Conference (East/West)
  displayConference = scrapy.Field()
  # Division
  division = scrapy.Field()

The spider file

import scrapy
import json
from nbaProject.items import NbaprojectItem

class NbaspiderSpider(scrapy.Spider):
  name = 'nbaSpider'
  allowed_domains = ['nba.com']
  # The URL(s) to crawl first; more than one can be listed
  # start_urls = ['http://nba.com/']
  start_urls = ['https://china.nba.com/static/data/league/playerlist.json']
  # Handle the response for each URL
  def parse(self, response):
    # The site returns JSON, so first parse the body with the
    # standard-library json module
    data = json.loads(response.text)['payload']['players']
    # Counter for progress output
    count = 1
    for i in data:
      # Build an item object to hold one player's fields;
      # this is the NbaprojectItem imported above
      item = NbaprojectItem()
      # English name
      item['engName'] = str(i['playerProfile']['firstNameEn'] + i['playerProfile']['lastNameEn'])
      # Chinese name
      item['chName'] = str(i['playerProfile']['firstName'] + i['playerProfile']['lastName'])
      # Country name in English / Chinese
      item['contryEn'] = str(i['playerProfile']['countryEn'])
      item['contryCh'] = str(i['playerProfile']['country'])
      # Height and weight
      item['height'] = str(i['playerProfile']['height'])
      item['weight'] = str(i['playerProfile']['weight'])
      # Years in the NBA
      item['experience'] = str(i['playerProfile']['experience'])
      # Jersey number
      item['jerseyNo'] = str(i['playerProfile']['jerseyNo'])
      # Draft year
      item['draftYear'] = str(i['playerProfile']['draftYear'])
      # Team name in English / Chinese
      item['engTeam'] = str(i['teamProfile']['code'])
      item['chTeam'] = str(i['teamProfile']['displayAbbr'])
      # Position
      item['position'] = str(i['playerProfile']['position'])
      # Conference (East/West)
      item['displayConference'] = str(i['teamProfile']['displayConference'])
      # Division
      item['division'] = str(i['teamProfile']['division'])
      # Log progress
      print("Yielded", count, "items so far")
      count += 1
      # Hand the item back to the engine -> pipeline file
      yield item

Settings file -> enable the item pipeline

# Scrapy settings for nbaProject project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#   https://docs.scrapy.org/en/latest/topics/settings.html
#   https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#   https://docs.scrapy.org/en/latest/topics/spider-middleware.html
# ---------- unchanged from the generated template ----------
BOT_NAME = 'nbaProject'

SPIDER_MODULES = ['nbaProject.spiders']
NEWSPIDER_MODULE = 'nbaProject.spiders'
# ---------- unchanged from the generated template ----------

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'nbaProject (+http://www.yourdomain.com)'

# Obey robots.txt rules
# ---------- modified: the template's ROBOTSTXT_OBEY = True is commented out,
# so the crawler does not check robots.txt (look the setting up for details) ----------
# ROBOTSTXT_OBEY = True
# ---------- modified ----------

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#  'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#  'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#  'nbaProject.middlewares.NbaprojectSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#  'nbaProject.middlewares.NbaprojectDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#  'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
# Enable the item pipeline
# ---------- modified ----------
ITEM_PIPELINES = {
  'nbaProject.pipelines.NbaprojectPipeline': 300,
}
# ---------- modified ----------
# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

The pipeline file -> writing the fields to MySQL

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html


# useful for handling different item types with a single interface
from itemadapter import ItemAdapter

import pymysql


class NbaprojectPipeline:
  # Set up the database connection when the pipeline is created
  def __init__(self):
    # Connect to the database -- replace the placeholders with your own settings
    self.connect = pymysql.connect(host='your-host', user='your-user', passwd='your-password',
                    db='your-database', port=3306)  # 3306 is the MySQL default port
    # Get a cursor
    self.cursor = self.connect.cursor()
    # Create a table to hold the item fields
    createTableSql = """
              create table if not exists `nbaPlayer`(
              playerId INT UNSIGNED AUTO_INCREMENT,
              engName varchar(80),
              chName varchar(20),
              height varchar(20),
              weight varchar(20),
              contryEn varchar(50),
              contryCh varchar(20),
              experience int,
              jerseyNo int,
              draftYear int,
              engTeam varchar(50),
              chTeam varchar(50),
              position varchar(50),
              displayConference varchar(50),
              division varchar(50),
              primary key(playerId)
              )charset=utf8;
              """
    # Execute the SQL statement
    self.cursor.execute(createTableSql)
    self.connect.commit()
    print("Table created")

  # Every item yielded by the spider is handled here
  def process_item(self, item, spider):
    # Print the item so progress is easy to watch
    print(item)
    # Parameterized insert; playerId is AUTO_INCREMENT, so null is passed for it
    insert_sql = """
    insert into nbaPlayer(
    playerId, engName,
    chName, height,
    weight, contryEn,
    contryCh, experience,
    jerseyNo, draftYear,
    engTeam, chTeam,
    position, displayConference,
    division
    ) VALUES (null,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)
    """
    # Run the insert; the driver substitutes the item fields for the %s
    # placeholders, escaping the values safely
    self.cursor.execute(insert_sql, (item['engName'], item['chName'], item['height'], item['weight'],
                     item['contryEn'], item['contryCh'], item['experience'], item['jerseyNo'],
                     item['draftYear'], item['engTeam'], item['chTeam'], item['position'],
                     item['displayConference'], item['division']))
    # Commit -- without it nothing is actually saved to the database
    self.connect.commit()
    print("Row inserted")
    # Return the item so any later pipelines could keep processing it
    return item

  # Close the cursor and connection once the spider finishes
  def close_spider(self, spider):
    self.cursor.close()
    self.connect.close()

Starting the spider
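From the project root the spider is started with Scrapy's crawl command, using the name defined in the spider class:

scrapy crawl nbaSpider

Alternatively, a small helper script (a convenience assumed here, not shown in the original project) lets the crawl be launched directly from PyCharm:

# start.py -- optional helper so the crawl can be run from the IDE
from scrapy import cmdline
cmdline.execute("scrapy crawl nbaSpider".split())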

Data scrolling across the screen as the spider runs:


Checking the data in the database:
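Any MySQL client will do for the check; for example, a quick query against the nbaPlayer table created by the pipeline:

-- Sample of the rows written by the pipeline
SELECT playerId, engName, chTeam, position
FROM nbaPlayer
LIMIT 5;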


And just like that, the player data has been pulled down and stored.

That concludes this detailed walkthrough of crawling NBA player data with Scrapy and storing it in a MySQL database.

