Best Practice: Ignore SSL Verification with urllib.request in Python 3.x – Python Web Crawler Tutorial

By | July 19, 2019

Ignoring SSL verification when crawling a URL allows our Python crawler to retrieve page content most of the time. In this tutorial, we will show how to do it.


# -*- coding:utf-8 -*-
import urllib.request

Create a URL to crawl

url = ''

Create a request to crawl

def getRequest(url, post_data=None):
    # Build a request with browser-like headers so the server treats
    # the crawler like a normal browser visit.
    req = urllib.request.Request(url, data=post_data)
    req.add_header('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8')
    req.add_header('Accept-Encoding', 'gzip, deflate, br')
    req.add_header('Accept-Language', 'zh-CN,zh;q=0.9')
    req.add_header('Cache-Control', 'max-age=0')
    req.add_header('Referer', '')
    req.add_header('User-Agent', 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36')
    return req
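As a quick check of the helper above, we can build a request and inspect its headers; the URL below is a placeholder for illustration only. Note that urllib.request.Request capitalizes header names internally, so a header added as 'User-Agent' is stored under the key 'User-agent'.

```python
import urllib.request

# Build a request the same way getRequest() does, with a placeholder URL.
req = urllib.request.Request('https://example.com')
req.add_header('User-Agent', 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36')

# Request.add_header() capitalizes the name: 'User-Agent' becomes 'User-agent'.
print(req.get_header('User-agent'))
```

No network request is made here; constructing a Request only prepares the headers and URL.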

A Simple Guide to Use urllib to Crawl Web Page in Python 3 – Python Web Crawler Tutorial

Start to crawl with SSL verification enabled

crawl_url = ''
crawl_req = getRequest(crawl_url)
crawl_response = None
try:
    crawl_response = urllib.request.urlopen(crawl_req, timeout=30)
except urllib.error.HTTPError as e:
    error_code = e.code
    print(e)
except urllib.error.URLError as ue: # such as a timeout
    print(ue)

Then you will get an ssl.CertificateError like this:

ssl.CertificateError - hostname does not match either of

To fix this error, we can ignore SSL verification when crawling this URL.

Crawl the page, ignoring SSL verification

    # ignore ssl verification
    import ssl
    context = ssl._create_unverified_context()
    crawl_response = urllib.request.urlopen(crawl_req, timeout=30, context=context)

We need to pass an unverified SSL context to urllib.request.urlopen() as above.
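The standard-library way to skip verification is an SSL context with checking disabled; ssl._create_unverified_context() is a shortcut for this, and the equivalent explicit construction (a sketch using only stdlib calls, which makes the intent clearer) is:

```python
import ssl

# Build a default SSL context, then disable verification.
# check_hostname must be turned off BEFORE verify_mode is set to
# CERT_NONE, otherwise Python raises a ValueError.
context = ssl.create_default_context()
context.check_hostname = False       # do not check the hostname against the cert
context.verify_mode = ssl.CERT_NONE  # do not verify the certificate chain

# This context can then be passed to urlopen(), e.g.:
# crawl_response = urllib.request.urlopen(crawl_req, timeout=30, context=context)
print(context.verify_mode)
```

Keep in mind that a crawler using such a context is vulnerable to man-in-the-middle attacks, so only use it for content where authenticity does not matter.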

Then crawl this URL again; you will find the error is fixed.
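One more caveat: because our request sends Accept-Encoding: gzip, deflate, br, the server may return a compressed body, and urllib does not decompress it for us. A minimal sketch of handling a gzip body follows; it simulates the response with locally compressed bytes, since the crawl URL above is a placeholder, and decode_body is a hypothetical helper name.

```python
import gzip

def decode_body(raw_bytes, content_encoding):
    # urllib returns the raw body; decompress it ourselves if needed.
    if content_encoding == 'gzip':
        return gzip.decompress(raw_bytes)
    return raw_bytes

# Simulate a gzip-encoded response body.
body = gzip.compress('<html>hello</html>'.encode('utf-8'))
html = decode_body(body, 'gzip').decode('utf-8')
print(html)  # <html>hello</html>
```

In a real crawl, the encoding would come from crawl_response.getheader('Content-Encoding'); note that brotli (br) needs a third-party package, so you may prefer to drop br from the Accept-Encoding header.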