Web Scraping with Python: A Tutorial on the BeautifulSoup HTML Parser

Introduction

Web scraping is a technique used to extract large amounts of data from websites and format it for use in a variety of applications. It allows us to automatically extract data and present it in a usable format, or to process and store the data elsewhere. The collected data can also be part of a pipeline where it serves as input for other programs.

In the past, extracting information from a website meant copying the text on a web page by hand, a method that is highly inefficient and does not scale. These days, there are some nifty packages in Python that help us automate the process! In this post, I’ll cover some use cases for web scraping, highlight the most popular open source packages, and work through an example project that scrapes publicly available data on GitHub.

Web Scraping Use Cases

Web scraping is a powerful data collection tool when used efficiently. Some examples of areas where web scraping is employed are:

  • Search: Search engines use web scraping to index websites so that they appear in search results. The better the scraping techniques, the more accurate the results.
  • Trends: In communications and media, web scraping can be used to track the latest trends and stories, since no team has the manpower to cover every new story or trend manually. With web scraping, a small team can monitor far more of this field.
  • Branding: Web scraping also allows communications and marketing teams to scrape information about their brand’s online presence. By scraping reviews of your brand, you can learn what people think or feel about your company and tailor your outreach and engagement strategies around that information.
  • Machine Learning: Web scraping is extremely useful in mining data for building and training machine learning models.
  • Finance: It can be useful to scrape data that might affect movements in the stock market. While some online aggregators exist, building your own collection pool allows you to manage latency and ensure data is being correctly categorized or prioritized.

Tools & Libraries

There are several popular open source libraries that give programmers the tools to quickly build their own scrapers. Some of my favorites include:

  • Requests – a library for sending HTTP requests. It is very popular and much easier to use than the standard library’s urllib (a short example follows the comparison table below).
  • BeautifulSoup – a parsing library that sits on top of different parsers (such as the standard library’s html.parser or lxml) to extract data from HTML and XML documents. It can navigate a parsed document and pull out exactly what is required.
  • Scrapy – a Python framework that was originally designed for web scraping but is increasingly used to extract data through APIs or as a general purpose web crawler. It can also handle output pipelines. With Scrapy, you can create a project containing multiple scrapers, and its shell mode lets you experiment with its capabilities interactively.
  • lxml – provides Python bindings to libxml2 and libxslt, fast C libraries for HTML and XML processing. It can be used on its own to parse sites, but requires more code than BeautifulSoup to get the same result; it is also available as one of BeautifulSoup’s parser backends.
  • Selenium – a browser automation framework. It is useful for scraping dynamically rendered web pages, where a real browser has to be driven so that JavaScript is executed.

Library        | Learning curve | Can fetch | Can process | Can run JS | Performance
requests       | easy           | yes       | no          | no         | fast
BeautifulSoup4 | easy           | no        | yes         | no         | normal
lxml           | medium         | no        | yes         | no         | fast
Selenium       | medium         | yes       | yes         | yes        | slow
Scrapy         | hard           | yes       | yes         | no         | normal
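
To make the fetching side of that comparison concrete, here is a minimal sketch of an HTTP GET with Requests; example.com stands in for whatever site you want to fetch, and the printed values will differ from site to site.

```python
import requests

# Requests turns an HTTP GET into a one-liner; with urllib you would have to
# build a Request object and decode the response bytes yourself.
response = requests.get("https://example.com")

print(response.status_code)              # e.g. 200 on success
print(response.headers["Content-Type"])  # e.g. "text/html; charset=UTF-8"
print(response.text[:80])                # start of the HTML body as a string
```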

Using the BeautifulSoup HTML Parser on GitHub

We’re going to use the BeautifulSoup library to build a simple web scraper for GitHub. I chose BeautifulSoup because it is a simple library for extracting data from HTML and XML files, with a gentle learning curve and relatively little effort required. It provides handy helper functions for traversing and searching the DOM tree of a parsed HTML document.
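
To show the kind of traversal helpers it provides, here is a minimal, self-contained sketch; the HTML snippet, tag names, and ids are invented for illustration and are not GitHub’s actual markup.

```python
from bs4 import BeautifulSoup

# Illustrative HTML only -- these tags and ids are made up for this sketch,
# not taken from GitHub's real pages.
html = """
<ul id="repositories">
  <li><a href="/repo-one">repo-one</a> <span class="lang">Python</span></li>
  <li><a href="/repo-two">repo-two</a> <span class="lang">C</span></li>
</ul>
"""

soup = BeautifulSoup(html, "html.parser")

# find() / find_all() plus tag attribute access are the core traversal helpers.
for item in soup.find("ul", id="repositories").find_all("li"):
    print(item.a.text, item.span.text)
```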

Requirements

In this guide, I will assume that you are on a Unix-based or Windows machine. You will also need the following installed:

  • Python 3
  • BeautifulSoup4 Library
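
If you want to confirm both requirements are available before continuing, a quick check like the one below should run without errors; it assumes BeautifulSoup4 was installed with pip (for example, pip install beautifulsoup4).

```python
# Sanity check for the requirements listed above.
import sys
from bs4 import BeautifulSoup

print(sys.version_info)                                  # major should be 3
print(BeautifulSoup("<p>ok</p>", "html.parser").p.text)  # should print "ok"
```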

Profiling the Webpage

We first need to decide what information we want to gather. In this case, I want to fetch a list of a user’s repositories along with their titles, descriptions, and primary programming languages. To do this, we will scrape GitHub for the details of a user’s repositories. While this information is available through GitHub’s API, scraping the data ourselves gives us more control over the format and completeness of the final data.

Once that’s done, we’ll profile the website to see where our target information is located and create a plan to retrieve it.

To profile the website, visit the webpage in a browser and inspect it with the developer tools to see how its elements are laid out.

Let’s visit Guido van Rossum’s GitHub profile as an example and view his repositories:
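
The same page can also be fetched from Python as a quick sanity check before digging into the markup. The sketch below assumes the Requests library is installed, that gvanrossum is the username in question, and that GitHub still serves the repositories list at the ?tab=repositories URL (its markup and URLs can change over time).

```python
import requests
from bs4 import BeautifulSoup

# Fetch the public repositories tab of the example profile.
url = "https://github.com/gvanrossum?tab=repositories"
response = requests.get(url)
response.raise_for_status()

# Parse the returned HTML so it can be explored alongside the browser's inspector.
soup = BeautifulSoup(response.text, "html.parser")
print(soup.title.text)
```

If the request succeeds, the printed title should mention the user’s repositories, confirming that the page we profile in the browser is the same one our script sees.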
