Re: [Question] Web crawler - Gossiping board
※ Quoting l8PeakNeymar (十八尖山內馬爾):
: This problem has been bugging me for a while.
: All the tutorials online are for Python or Java,
: so I'd like to ask about crawling with a C# console project.
: Whenever I crawl the Gossiping board or the sex board,
: for example when I request this page:
: https://www.ptt.cc/bbs/Gossiping/M.1234567890.A.D55.html
: what comes back is this instead:
: https://www.ptt.cc/ask/over18
: I've been trying to figure out how to send the over-18 confirmation cookie to the server along with the request.
: I've randomly tried a bunch of classes:
: System.Net.Cookie, HttpWebRequest, WebRequest...
: None of them worked, since honestly I don't understand the underlying mechanism.
: Could any board member walk me through it? Much appreciated!
: -----
: Sent from JPTT on my Xiaomi Redmi Note 4.
Here's some Python I wrote back in 2015; not sure if it still works.
The key part should be that payload line.
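Incidentally, if PTT still only checks that an `over18` cookie exists (it did last time I checked), you can skip the form POST entirely and set the cookie on the session yourself — a minimal sketch, where `make_over18_session` is just a name I made up:

```python
import requests

def make_over18_session():
    """Return a requests session that already passes PTT's age check."""
    s = requests.Session()
    # setting the cookie directly replaces the POST to /ask/over18
    s.cookies.set('over18', '1', domain='.ptt.cc')
    return s

s = make_over18_session()
print(s.cookies.get('over18'))  # '1'
```

Every later `s.get(...)` then sends the cookie automatically, which is all the age wall looks for.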
import os, sys
import csv
import datetime, time
import requests
from bs4 import BeautifulSoup
import Ptt_FileGet
tStart = time.time()
payload = {'from': '/bbs/Gossiping/index.html', 'yes': 'yes'}
rs = requests.session()
# POST the over-18 confirmation once; the session keeps the resulting cookie
index_Page = rs.post('https://www.ptt.cc/ask/over18', verify=False,
                     data=payload)
index_Page = rs.get("https://www.ptt.cc/bbs/Gossiping/index.html")
soup_index_Page = BeautifulSoup(index_Page.text, "html.parser")  # grab each article's URL link
print("soup_index_Page's Type: ", type(soup_index_Page))
index_tag = soup_index_Page.find_all('a', href=True)
page = index_tag[7].get('href')  # link #7 happens to be the "previous page" button
# take the digits between 'index' and '.html', so it works for any URL length
index_num = int(page[page.index('index') + 5:page.index('.html')])
day_today = datetime.datetime.now()
day_minus = day_today + datetime.timedelta(days = -1)
day_yest = day_minus.strftime("%m/%d")[1:]
#day_yest = day_today.strftime("%m/%d")[1:]
URL_filename='D:\Ptt_data\Gossiping_'+day_today.strftime("%m%d")+'_URL.csv'
URL_file = open(URL_filename, 'w', newline='')
#print("I Will Create One For You")
URL_w = csv.writer(URL_file)
URL_w.writerow(['author', 'date', 'link']) #創好檔案名稱了
#此段存文章的網頁聯結
data_filename='D:\Ptt_data\Gossiping_'+day_today.strftime("%m%d")+'_data.csv'
data_file = open(data_filename, 'w', newline='')
data_w = csv.writer(data_file)
data_w.writerow([u'作者', u'日期', u'標題', u'價格']) #創好檔案名稱了
#此段存文章的資料
data_file.close()
count = 0
PTT_URL = 'https://www.ptt.cc'
print("yesterday is ",day_yest)
day_test=' '+day_yest #後面的Post_date有多一個空白 這邊是為了簡單處理才這樣做
print(index_num)
while count == 0:
    try:
        # use the session (rs), not bare requests.get, so the over18 cookie is sent along
        res_index = rs.get(PTT_URL + '/bbs/' + 'Gossiping' +
                           '/index' + str(index_num) + '.html')
        soup_index = BeautifulSoup(res_index.text, "html.parser")  # grab each article's URL link
        main_container_index = soup_index.select('.r-ent')
        for link in main_container_index:
            try:  # deleted articles would raise in here; if an article is missing, just pass
                Post_author = link.select('div.author')[0].text
                Post_date = link.select('div.date')[0].text
                Post_link = link.find('a')['href']  # this is the key part; looks odd, huh
                URL_Link = PTT_URL + Post_link
                data = [[Post_author, Post_date, PTT_URL + Post_link]]
                URL_w.writerows(data)
                if Post_date == day_test:
                    Ptt_FileGet.data_save(PTT_URL + Post_link,
                                          data_filename)
            except Exception:
                pass
        if Post_date >= day_test:
            index_num = index_num - 1
            print("================", index_num, "====================")
        else:
            count = 1
            print('The End')
    except Exception:
        pass
URL_file.close()
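Also, pulling the index number out of the href is more robust with a regex, so it doesn't matter how many digits the board is up to (`index_number` is just a throwaway helper name of mine):

```python
import re

def index_number(href):
    """Extract the page number from a PTT index URL,
    e.g. '/bbs/Gossiping/index39321.html' -> 39321."""
    m = re.search(r'index(\d+)\.html$', href)
    return int(m.group(1)) if m else None

print(index_number('/bbs/Gossiping/index39321.html'))  # 39321
```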
--
※ Posted from: PTT (ptt.cc), From: 1.169.72.64
※ Article URL: https://www.ptt.cc/bbs/C_Sharp/M.1524667672.A.B80.html