Shorten the README #383

Merged 4 commits on Jan 4, 2021
README.md: 109 changes (39 additions, 70 deletions)

Currently, we support Python 3. There are several ways to install Cornac:

- **From PyPI (you may need a C++ compiler):**
```bash
pip3 install cornac
```

- **From Anaconda:**
```bash
conda install cornac -c conda-forge
```

- **From the GitHub source (for latest updates):**
```bash
pip3 install Cython
git clone https://github.com/PreferredAI/cornac.git
cd cornac
python3 setup.py install
```
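Since Cornac supports Python 3 only, a small interpreter guard (a generic sketch, not part of Cornac's API) can fail fast before heavier imports are attempted:

```python
import sys

def check_python(min_version=(3, 0)):
    """Return True when the running interpreter meets the Python 3 requirement."""
    return sys.version_info >= min_version

# fail fast before pulling in heavier dependencies
if not check_python():
    raise RuntimeError("Cornac requires Python 3; found %s" % sys.version.split()[0])
```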

**Note:**

Additional dependencies required by models are listed [here](README.md#Models).

Some algorithm implementations use `OpenMP` to support multi-threading. For macOS users, in order to run those algorithms efficiently, you might need to install `gcc` from Homebrew to have an OpenMP-capable compiler:
```bash
brew install gcc && brew link gcc
```

If you want to utilize your GPUs, you might consider:

- [TensorFlow installation instructions](https://www.tensorflow.org/install/).
- [PyTorch installation instructions](https://pytorch.org/get-started/locally/).
- [cuDNN](https://docs.nvidia.com/deeplearning/sdk/cudnn-install/) (for Nvidia GPUs).

## Getting started: your first Cornac experiment

![Flow of an Experiment in Cornac](flow.jpg)
<p align="center"><i>Flow of an Experiment in Cornac</i></p>

Load the built-in [MovieLens 100K](https://grouplens.org/datasets/movielens/100k/) dataset (it will be downloaded if not cached), split it by ratio, and compare `Biased MF`, `PMF`, and `BPR` on several metrics:

```python
import cornac
from cornac.eval_methods import RatioSplit
from cornac.models import MF, PMF, BPR
from cornac.metrics import MAE, RMSE, Precision, Recall, NDCG, AUC, MAP

# load the built-in MovieLens 100K and split the data based on ratio
ml_100k = cornac.datasets.movielens.load_feedback()
rs = RatioSplit(data=ml_100k, test_size=0.2, rating_threshold=4.0, seed=123)

# initialize models, here we are comparing: Biased MF, PMF, and BPR
models = [
    MF(k=10, max_iter=25, learning_rate=0.01, lambda_reg=0.02, use_bias=True, seed=123),
    PMF(k=10, max_iter=100, learning_rate=0.001, lambda_reg=0.001, seed=123),
    BPR(k=10, max_iter=200, learning_rate=0.001, lambda_reg=0.01, seed=123),
]

# define metrics to evaluate the models
metrics = [MAE(), RMSE(), Precision(k=10), Recall(k=10), NDCG(k=10), AUC(), MAP()]

# put it together in an experiment, voilà!
cornac.Experiment(eval_method=rs, models=models, metrics=metrics, user_based=True).run()
```
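Under the hood, `RatioSplit` holds out the requested fraction of the feedback triples for testing. A minimal pure-Python sketch of the idea (not Cornac's actual implementation, which also supports validation sets and handles unseen users and items):

```python
import random

def ratio_split(data, test_size=0.2, seed=123):
    """Shuffle (user, item, rating) triples and carve off a test fraction."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_size)
    return shuffled[n_test:], shuffled[:n_test]  # (train, test)

# toy feedback in the same (user, item, rating) format load_feedback() returns
feedback = [("u1", "i1", 5.0), ("u1", "i2", 3.0), ("u2", "i1", 4.0),
            ("u2", "i3", 2.0), ("u3", "i2", 4.5)]
train, test = ratio_split(feedback, test_size=0.2)
```

Fixing `seed` makes the split reproducible across runs, which is why the example above passes `seed=123` everywhere.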

**Output:**

examples/first_example.py: 47 changes (15 additions, 32 deletions)

```python
"""Your very first example with Cornac"""

import cornac
from cornac.eval_methods import RatioSplit
from cornac.models import MF, PMF, BPR
from cornac.metrics import MAE, RMSE, Precision, Recall, NDCG, AUC, MAP


# load the built-in MovieLens 100K and split the data based on ratio
ml_100k = cornac.datasets.movielens.load_feedback()
rs = RatioSplit(data=ml_100k, test_size=0.2, rating_threshold=4.0, seed=123)

# initialize models, here we are comparing: Biased MF, PMF, and BPR
models = [
    MF(k=10, max_iter=25, learning_rate=0.01, lambda_reg=0.02, use_bias=True, seed=123),
    PMF(k=10, max_iter=100, learning_rate=0.001, lambda_reg=0.001, seed=123),
    BPR(k=10, max_iter=200, learning_rate=0.001, lambda_reg=0.01, seed=123),
]

# define metrics to evaluate the models
metrics = [MAE(), RMSE(), Precision(k=10), Recall(k=10), NDCG(k=10), AUC(), MAP()]

# put it together in an experiment, voilà!
cornac.Experiment(eval_method=rs, models=models, metrics=metrics, user_based=True).run()
```
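For intuition on the ranking metrics used in the example, NDCG@k with binary relevance reduces to a short formula: the discounted gain of the hits in the top-k ranking, normalized by the ideal ranking's gain. A pure-Python sketch (independent of Cornac's implementation):

```python
import math

def ndcg_at_k(ranked_items, relevant, k=10):
    """DCG of the top-k ranked list divided by the ideal DCG, binary relevance."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(ranked_items[:k]) if item in relevant)
    ideal = sum(1.0 / math.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal > 0 else 0.0

# a relevant item at rank 1 and rank 3, with one miss in between
score = ndcg_at_k(["i1", "i9", "i3"], {"i1", "i3"}, k=3)
```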