juliandwain/xscrapers


XSCRAPERS

The XSCRAPERS package provides an object-oriented interface to some simple web scraping techniques.

A basic use case is loading pages into BeautifulSoup elements. This package loads the URLs concurrently using multiple threads, which can save a significant amount of time.

import xscrapers.webscraper as ws

URLS = [
    "https://www.google.com/",
    "https://www.amazon.com/",
    "https://www.youtube.com/",
]
PARSER = "html.parser"
web_scraper = ws.Webscraper(PARSER, verbose=True)
web_scraper.load(URLS)  # fetch all URLs concurrently
web_scraper.parse()  # parse the responses into BeautifulSoup elements

Note that the scraped data is stored in the data attribute of the webscraper, and the parsed URLs in the url attribute.
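The concurrent loading described above can be sketched with Python's standard library alone. This is a minimal illustration of the threading idea, not the package's actual implementation; the fetch function below is a hypothetical stand-in for a real HTTP request.

```python
from concurrent.futures import ThreadPoolExecutor

URLS = [
    "https://www.google.com/",
    "https://www.amazon.com/",
    "https://www.youtube.com/",
]

def fetch(url: str) -> str:
    # Stand-in for an HTTP GET; a real scraper would return the page HTML here.
    return f"<html><body>{url}</body></html>"

def load_concurrently(urls):
    # Each URL is fetched in its own worker thread, so slow responses overlap
    # instead of adding up sequentially; map() preserves the input order.
    with ThreadPoolExecutor(max_workers=len(urls)) as pool:
        return list(pool.map(fetch, urls))

pages = load_concurrently(URLS)
```

Because network latency dominates scraping time, overlapping the waits this way is where the large speedup comes from.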

Downloading the Firefox Geckodriver

Linux

See this link for a good explanation. In short, the steps are:

  1. Download the geckodriver from the Mozilla GitHub release page; replace each X with the version you want to download

    wget https://github.com/mozilla/geckodriver/releases/download/vX.XX.X/geckodriver-vX.XX.X-linux64.tar.gz
  2. Extract the archive with

    tar -xvzf geckodriver*
  3. Make it executable

    chmod +x geckodriver
  4. In the last step, the driver can either be added to the PATH environment variable, moved to the /usr/local/bin folder, or passed as a full path to the Webdriver class via the exe_path argument

    export PATH=$PATH:/path-to-extracted-file/
    sudo mv geckodriver /usr/local/bin/