Scrapy login simulation code

The first approach posts the login form fields directly with scrapy.FormRequest:
import re
import scrapy


def re_fuc(body, pattern):
    # Regex helper used below; the original post never defines it, so this is
    # a reconstruction: decode the response body and return all matches.
    return re.findall(pattern, body.decode('utf-8'))


class TestSmtSpider(scrapy.Spider):
    name = "test_smt"
    # allowed_domains = ["maiziedu.com"]
    start_urls = ['http://www.maiziedu.com/']

    def parse(self, response):
        # POST the login form fields directly with scrapy.FormRequest.
        return scrapy.FormRequest(url='http://www.maiziedu.com/user/login/',
                                  formdata={'account_l': '******',
                                            'password_l': '****'},
                                  callback=self.after_l)

    def after_l(self, response):
        # Body of the login response; print it to verify the login worked.
        print(response.body)
        # Scrapy keeps the session cookies, so this request is authenticated.
        return scrapy.Request(url='http://www.maiziedu.com/user/center/?source=login',
                              callback=self.after_lo)

    def after_lo(self, response):
        # Pull the logged-in user name out of the personal-center page.
        rel = r'class="dt-username"([\s\S]*?)v5-icon v5-icon-rd'
        my_name = re_fuc(response.body, rel)[0]
        print('*******************', my_name)
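A minimal way to run this spider from a plain Python script, assuming the class above is importable (inside a Scrapy project you would normally run scrapy crawl test_smt instead):

from scrapy.crawler import CrawlerProcess
# from myproject.spiders import TestSmtSpider  # assumed import path

# Start a Twisted reactor, run the spider, and block until it finishes.
process = CrawlerProcess(settings={'LOG_LEVEL': 'INFO'})
process.crawl(TestSmtSpider)
process.start()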



Another approach:

It is usual for web sites to provide pre-populated form fields through <input type="hidden"> elements, such as session-related data or authentication tokens (for login pages). When scraping, you'll want these fields to be automatically pre-populated and only override a couple of them, such as the user name and password. You can use the FormRequest.from_response() method for this job. Here's an example spider which uses it:

import scrapy


class LoginSpider(scrapy.Spider):
    name = 'example.com'
    start_urls = ['http://www.example.com/users/login.php']

    def parse(self, response):
        # from_response() pre-populates the hidden form fields from the page,
        # so only the user name and password need to be overridden.
        return scrapy.FormRequest.from_response(
            response,
            formdata={'username': 'john', 'password': 'secret'},
            callback=self.after_login
        )

    def after_login(self, response):
        # Check that login succeeded before going on
        # (response.body is bytes, so compare against a bytes literal).
        if b"authentication failed" in response.body:
            self.logger.error("Login failed")
            return
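If the login check passes, the authenticated session can be reused for further requests. As a rough sketch, after_login could go on to request a protected page; the URL and the parse_profile callback below are hypothetical, not part of the original example:

        # Continue scraping with the authenticated session.
        # Hypothetical members-only URL, for illustration only.
        return scrapy.Request(url='http://www.example.com/users/profile.php',
                              callback=self.parse_profile)

    def parse_profile(self, response):
        # Fetched with the session cookies obtained at login.
        print(response.text)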
