PLANE: a dynamic resource for compositional entailment ✈️

The repo contains the PLANE resource and the training/test splits used in the supervised learning experiments of the COLING 2022 paper Testing Large Language Models on Compositionality and Inference with Phrase-Level Adjective-Noun Entailment

🤗 Dataset

You can also use the train/test splits from the supervised experiments via the Hugging Face datasets library:

from datasets import load_dataset

# Load the out-of-distribution PLANE splits from the Hugging Face Hub
dataset = load_dataset("lorenzoscottb/PLANE-ood")

You can find the dataset with its card here.
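As a quick sanity check, you can list the loaded splits and peek at one example. This is a minimal sketch; the exact split and column names are not specified here, so check the dataset card for the real ones:

from datasets import load_dataset

dataset = load_dataset("lorenzoscottb/PLANE-ood")

# Show the available splits and their sizes
print(dataset)

# Peek at the first example of one split
# (column names are dataset-specific; see the dataset card)
first_split = list(dataset.keys())[0]
print(dataset[first_split][0])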

🤗 Tuned Model

A BERT model fine-tuned on the 2nd out-of-distribution split is available here, and can be used directly via the transformers library's pipeline:

from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

model_name = "lorenzoscottb/bert-base-cased-PLANE-ood-2"
tokenizer  = AutoTokenizer.from_pretrained(model_name)
model      = AutoModelForSequenceClassification.from_pretrained(model_name)

test_inferences = [
    "A red car is a vehicle",
    "A small cat is a small mammal",
    "A fake smile is a smile",
]

classifier = pipeline(
    task="text-classification", 
    model=model, 
    tokenizer=tokenizer,
)

predictions = classifier(test_inferences)
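Each prediction is a dict with a label and a score. A minimal sketch for inspecting the output follows; the exact label strings (e.g. LABEL_0/LABEL_1) depend on the model config and are an assumption here, so check the model card for the label-to-class mapping:

# Print one prediction per input sentence
# (label strings are an assumption; see the model card for their meaning)
for sentence, pred in zip(test_inferences, predictions):
    print(f"{sentence!r} -> {pred['label']} ({pred['score']:.3f})")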

Cite

If you use PLANE for your work, please cite the main COLING 2022 paper.

@inproceedings{bertolini-etal-2022-testing,
    title = "Testing Large Language Models on Compositionality and Inference with Phrase-Level Adjective-Noun Entailment",
    author = "Bertolini, Lorenzo  and
      Weeds, Julie  and
      Weir, David",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2022.coling-1.359",
    pages = "4084--4100",
}
