
Question on a different evaluation setting #192

Open
vefthym opened this issue Mar 14, 2021 · 1 comment


vefthym commented Mar 14, 2021

First of all, thanks for this amazing tool! It's surprisingly easy to run :)
This is a question, not a bug report.

I would like to use your tool in the following setting, and I am asking about the configuration that I should put in the YAML file:

I have a graph with multiple link types. I want to evaluate the trained models for only one of the link types and more specifically, I would ideally want to predict whether this link type exists or not in my test data.
I.e., my training data is of the form:
s1 r1 o1
s2 r* o2
s3 r2 o3
s4 r* o2
s4 r* o1

where r1, r2, r* are link types,
and my test data is of the form:
x1 r* y1
x2 r* y2
x2 r* y3

where the relation is always r*, and I want to get a YES/NO answer for my test data. If my understanding is correct, the proper approach would be to set the training parameters:
sp_: False
s_o: True
_po: False

and to count a prediction as correct when r* is the suggested relation type.
I know this is not giving a binary (YES/NO) classifier, but it seems closer to my problem setting. Is there a better alternative, without implementing a different classifier? Are there any other (non-default) params that I should change in the YAML that I copied from complex-train?
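For reference, in LibKGE these query-type flags are set under the KvsAll training job (this nesting is an assumption based on LibKGE's `config-default.yaml`; the exact path may differ in your version). A minimal sketch of the relevant YAML fragment, on top of the copied complex-train config, might look like:

```yaml
# Hypothetical fragment -- assumes a KvsAll-style training job as in
# the complex-train example; verify key paths against config-default.yaml.
train:
  type: KvsAll
KvsAll:
  query_types:
    sp_: False   # do not train on (s, p, ?) queries
    s_o: True    # train on (s, ?, o) relation queries
    _po: False   # do not train on (?, p, o) queries
```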

Sorry if this is not appropriate as a GitHub issue; it might help others as well, though :)

@vefthym vefthym changed the title Question on a different evaluation setting (binary classifier) Question on a different evaluation setting Mar 14, 2021
@rgemulla (Member) commented:

The best-performing setting of the training parameters / loss etc. is hard to conjecture; it may well be that your suggested setting does not generalize best, even though it seems more appropriate.

The entity_ranking evaluation of LibKGE currently cannot do what you want. It should be relatively straightforward to implement relation ranking (or relation classification) based on the current code, however. Contributions are welcome!
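To illustrate what relation ranking would look like independently of LibKGE's code, here is a small self-contained sketch (not LibKGE's API): it applies the ComplEx scoring function Re(⟨s, r, conj(o)⟩) to every relation for a given (s, o) pair, and answers YES for r* exactly when r* is the top-ranked relation. The function and variable names are hypothetical; with a trained LibKGE model you would use its learned embeddings instead of the toy vectors shown here.

```python
import numpy as np

def score_all_relations(s_emb, rel_embs, o_emb):
    """ComplEx score Re(<s, r, conj(o)>) for every relation r.

    s_emb, o_emb: complex entity embeddings of shape (d,)
    rel_embs:     complex relation embeddings of shape (num_relations, d)
    Returns one score per relation, shape (num_relations,).
    """
    return np.real(rel_embs @ (s_emb * np.conj(o_emb)))

def classify_r_star(s_emb, rel_embs, o_emb, r_star_idx):
    """YES (True) iff r* is the top-ranked relation for the pair (s, o)."""
    scores = score_all_relations(s_emb, rel_embs, o_emb)
    return int(np.argmax(scores)) == r_star_idx
```

A thresholded variant (YES iff the r* score exceeds some calibrated cutoff) would give a true binary classifier rather than a ranking-based one, at the cost of having to choose the threshold on validation data.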
