# spiderbot

A spider bot (crawler) written in Python, using Selenium and ChromeDriver. Published as the `spiderbot` module.

## How to use?

### Install

```shell
pip install spiderbot
```

Install the Chrome browser and ChromeDriver, and put the `chromedriver` binary in a directory on your `PATH`.
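A quick way to confirm the driver is discoverable is a small stdlib check (`chromedriver` is the usual binary name; adjust if yours differs):

```python
import shutil

# shutil.which searches the PATH the same way the shell would
driver = shutil.which("chromedriver")
print(driver or "chromedriver not found on PATH")
```

If this prints a path, Selenium should be able to launch Chrome through it.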

### Config

Create `config_private.py`, using `config_private_sample.py` as a template, and set the values of `XPATHS` and `DB_NAME`.

Alternatively, pass the `xpaths` and `db_name` arguments when creating a `SpiderBot` instance.
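A minimal sketch of what `config_private.py` might look like; the dict keys and XPath strings below are placeholders for illustration, not the real ones from `config_private_sample.py`:

```python
# config_private.py -- sketch only; copy config_private_sample.py and
# fill in the values it actually defines for your target site.

DB_NAME = "spiderbot.db"

XPATHS = {
    "profile": "//h1[@class='username']",     # hypothetical key and XPath
    "post_links": "//a[@class='post-link']",  # hypothetical key and XPath
}
```

The same values can also be supplied directly via the `xpaths` and `db_name` constructor arguments.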

### Run it

Initialize the database by passing `init=True` when creating a `SpiderBot` instance. On success, `spiderbot.db` is created.

```python
from spiderbot import SpiderBot

bot = SpiderBot(skip_driver=True, init=True)
```

Then add the users to crawl. You can add more users at any time.

```python
from spiderbot import SpiderBot

urls = ["https://example.com/user_a_homepage", "https://example.com/user_b_homepage"]

bot = SpiderBot()
bot.add_users(*urls, working_status=True)
```

Finally, do the main job:

```python
from spiderbot import SpiderBot

bot = SpiderBot()
bot.get_profiles()              # update user profiles
bot.get_new_posturls()          # collect URLs of new posts
bot.get_history_posturls(1, 9)  # collect historical post URLs
bot.get_posts()                 # fetch the posts themselves
bot.quit()                      # close the browser driver
```
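To sanity-check a run, you can list the tables in `spiderbot.db` with the stdlib `sqlite3` module (the table names are whatever spiderbot's init created; none are assumed here):

```python
import sqlite3

# Open the database created by SpiderBot(init=True) and list its tables
con = sqlite3.connect("spiderbot.db")
tables = [row[0] for row in con.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)
con.close()
```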

More examples are available in the repository.

## Code Format

```shell
isort .
black .
pylint spiderbot > pylint_spiderbot.log
```
