Python code for scraping, importing, processing data in the trust claims database

Whats-Cookin/trust-claim-data-pipeline

Data Pipelines for Trust Claims

This repo includes code for spidering and importing claims, and also for processing new claims entered by users.

For Processing Server

`./run_pipe.py` runs from the crontab of the backend server.

We are working on a microservice that the Node server running trust_claim_backend can call as each new claim is added; currently the crontab just updates every 5 minutes.
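As a rough sketch of what the planned per-claim hook could look like, the function below validates and normalizes a single freshly entered claim, the kind of work the microservice would do on each call instead of waiting for the 5-minute cron cycle. The function and field names here are illustrative assumptions, not the pipeline's real API.

```python
# Hypothetical per-claim processing hook (names are illustrative only).
def process_new_claim(claim: dict) -> dict:
    """Validate and normalize a single freshly entered claim."""
    required = {"subject", "claim", "source"}
    missing = required - claim.keys()
    if missing:
        raise ValueError(f"claim missing fields: {sorted(missing)}")
    # Illustrative normalization step: strip whitespace from string fields.
    return {k: v.strip() if isinstance(v, str) else v
            for k, v in claim.items()}
```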

For Spider

Python code to run the separate steps of the pipeline, and eventually perhaps to orchestrate them:

  1. spider and save raw data to be turned into claims

  2. clean and normalize the data into an importable format

  3. import into signed claims (signed by our spider)

That completes the import data pipeline.
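The three steps above can be sketched as a chain of small functions. Everything here is a placeholder under assumptions: the function names, the stubbed spider output, and the "signing" field are illustrative, not the repo's actual interfaces.

```python
# Illustrative sketch of the three import-pipeline stages chained together.
def spider(url: str) -> list[dict]:
    """Step 1: fetch raw records to be turned into claims (stubbed here)."""
    return [{"source": url, "raw": "  Alice endorses Bob  "}]

def normalize(records: list[dict]) -> list[dict]:
    """Step 2: clean and normalize raw records into an importable format."""
    return [{**r, "raw": r["raw"].strip()} for r in records]

def import_claims(records: list[dict], signer: str) -> list[dict]:
    """Step 3: wrap each record as a claim signed by our spider (placeholder)."""
    return [{"claim": r["raw"], "source": r["source"], "signed_by": signer}
            for r in records]

claims = import_claims(normalize(spider("https://example.org")), signer="spider-key")
```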

Then

  1. dedupe, parse and decorate claims into nodes and edges

The nodes and edges will be used to feed the front end views
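A minimal sketch of the dedupe-and-decorate step, assuming claims carry `subject`, `object`, and `claim` fields and that deduping keys on that triple; those assumptions and the edge shape are illustrative, not the repo's actual schema.

```python
# Hypothetical claims -> graph transform; field names are assumptions.
def claims_to_graph(claims: list[dict]) -> tuple[set, list[dict]]:
    nodes: set[str] = set()
    edges: list[dict] = []
    seen = set()
    for c in claims:
        key = (c["subject"], c["object"], c["claim"])  # dedupe on this triple
        if key in seen:
            continue
        seen.add(key)
        nodes.add(c["subject"])
        nodes.add(c["object"])
        edges.append({"from": c["subject"], "to": c["object"], "label": c["claim"]})
    return nodes, edges
```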

To publish to ceramic:

```shell
. penv/bin/activate
source .env
python3 ./run_publisher.py
```

Basic Program Architecture

[Program Architecture diagram]
