Land cover segmentation: an experiment distilling Segment Anything into Linknet

One of the significant issues with Segment Anything (SAM) is its model size and inference time: predicting a single frame takes about 75 sec on an Intel i5 CPU or 6 sec on an Nvidia P100 GPU.

In this experiment I trained a Linknet model to mimic SAM and got 0.25 sec/frame on a 768x768 image on the same Intel i5 CPU, instead of 75 sec/frame with the original SAM model. That is roughly a 300x speedup.
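
A minimal sketch of how such a per-frame timing can be measured is shown below, assuming the Linknet comes from segmentation_models_pytorch; the resnet34 encoder and the single output class are illustrative assumptions, not necessarily the configuration used in the notebooks.

```python
# Hedged sketch: timing one 768x768 tile through a Linknet on CPU.
# The encoder ("resnet34") and single output class are assumptions.
import time

import torch
import segmentation_models_pytorch as smp

model = smp.Linknet(encoder_name="resnet34", in_channels=3, classes=1)
model.eval()

frame = torch.rand(1, 3, 768, 768)  # stand-in for one 768x768 RGB tile

with torch.no_grad():
    start = time.perf_counter()
    mask = torch.sigmoid(model(frame))  # per-pixel land-cover probability
    print(f"{time.perf_counter() - start:.2f} sec/frame on CPU")
```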

However, there are several important constraints:

  • it works only for a particular territory
  • it works only at a particular zoom level
  • it is still not very accurate

Automatic segmentation example

Experiment pipeline

  1. Grab imagery: 1_get_data.ipynb
  2. Make predictions with SAM and store the results: 2_predict_data_with_sam.ipynb (see the sketch after this list)
  3. Train the Linknet model: 3_train_linknet.ipynb
  4. Mine a new data sample with an active learning approach: 4_mine_new_data_and_look_on_results.ipynb
  5. Tune the Linknet model: 5_tune_linknet.ipynb
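
To make step 2 concrete, here is a hedged sketch of pseudo-labelling a single tile with SAM's automatic mask generator; the checkpoint file, the vit_h model type, and the way per-object masks are merged into one binary label are assumptions for illustration, not necessarily what 2_predict_data_with_sam.ipynb does.

```python
# Hedged sketch of step 2: pseudo-label one tile with SAM's automatic
# mask generator. Checkpoint name, model type and the merge into a single
# binary label are assumptions.
import numpy as np
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

tile = np.random.randint(0, 255, (768, 768, 3), dtype=np.uint8)  # stand-in for a real RGB tile

masks = mask_generator.generate(tile)  # list of dicts, one per detected object

# Collapse the per-object masks into one binary label for Linknet training.
label = np.zeros(tile.shape[:2], dtype=np.uint8)
for m in masks:
    label |= m["segmentation"].astype(np.uint8)
```

The stored pairs of tiles and merged masks can then serve as the training set for step 3.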

Automatic segmentation examples

Google Colab or Kaggle Notebooks are enough to reproduce the experiments.