
fnn and bnn model both get print out like "Loading fairness results failed!" #260

Open
littlebeanbean7 opened this issue Nov 7, 2024 · 5 comments

Comments

@littlebeanbean7

Hello,

I am running the first example on "quick start": python -u main.py -data ../data/raw/dblp/toy.dblp.v12.json -domain dblp -model fnn bnn -fairness det_greedy -attribute popularity.

It seems to run. But I notice many occurrences of "Loading fairness results failed!" in the printout. Below I paste the last part of the printout. Could you please explain whether this warning is expected, or does it indicate that my run was unsuccessful? Thank you!

Reranking for the baseline ../output/toy.dblp.v12.json/bnn/t31.s10.m13.l[128].lr0.1.b128.e5.nns3.nsunigram_b.s1.lossSL/t31.s10.m13.l[128].lr0.1.b128.e5.nns3.nsunigram_b.s1.lossSL/f1.test.pred ...
Loading popularity labels ...
Loading reranking results ...
100%|████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 205603.14it/s]
Loading fairness evaluation results before and after reranking ...
Loading fairness results failed! Evaluating fairness metric {'ndkl'} ...
100%|█████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 19152.07it/s]
Loading utility metric evaluation results before and after reranking ...
Pipeline for the baseline ../output/toy.dblp.v12.json/bnn/t31.s10.m13.l[128].lr0.1.b128.e5.nns3.nsunigram_b.s1.lossSL/t31.s10.m13.l[128].lr0.1.b128.e5.nns3.nsunigram_b.s1.lossSL/f1.test.pred completed by <_MainProcess name='MainProcess' parent=None started>! 0.007432222366333008
####################################################################################################
####################################################################################################
Reranking for the baseline ../output/toy.dblp.v12.json/bnn/t31.s10.m13.l[128].lr0.1.b128.e5.nns3.nsunigram_b.s1.lossSL/t31.s10.m13.l[128].lr0.1.b128.e5.nns3.nsunigram_b.s1.lossSL/f2.test.pred ...
Loading popularity labels ...
Loading reranking results ...
100%|████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 209715.20it/s]
Loading fairness evaluation results before and after reranking ...
Loading fairness results failed! Evaluating fairness metric {'ndkl'} ...
100%|█████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 19328.59it/s]
Loading utility metric evaluation results before and after reranking ...
Pipeline for the baseline ../output/toy.dblp.v12.json/bnn/t31.s10.m13.l[128].lr0.1.b128.e5.nns3.nsunigram_b.s1.lossSL/t31.s10.m13.l[128].lr0.1.b128.e5.nns3.nsunigram_b.s1.lossSL/f2.test.pred completed by <_MainProcess name='MainProcess' parent=None started>! 0.0077745914459228516

@littlebeanbean7 littlebeanbean7 changed the title First example of quick start get print out like "Loading fairness results failed!" fnn and bnn model both get print out like "Loading fairness results failed!" Nov 11, 2024
@hosseinfani
Member

Hi @littlebeanbean7
Seems the main branch is not stable yet. Please work with cikm22 branch here: https://github.com/fani-lab/OpeNTF/tree/cikm22

@Hamedloghmani can you please look into this? We should make it so that when the fairness pipeline is not needed, nothing related to it gets involved, not even its packages.
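One common way to achieve this is to move the fairness-only imports inside the branch that uses them, so a run without fairness arguments never loads those packages. A minimal sketch with an illustrative module name (`reranking`), not OpeNTF's actual code:

```python
import importlib

def run_pipeline(cmds, fairness=None):
    # Core steps always run; fairness-only packages are imported lazily,
    # so a run without the fairness flag never touches them.
    results = {"pred": [0.1, 0.9]}  # stand-in for real train/test output
    if fairness:
        # Only resolved when fairness is actually requested.
        rerank = importlib.import_module("reranking")
        results = rerank.rerank(results, fairness)
    return results

print(run_pipeline(["train", "test", "eval"]))  # no fairness: no extra import
```

With this layout, a missing fairness dependency only fails runs that actually pass `-fairness`, not the default pipeline.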

@Hamedloghmani
Member

Hamedloghmani commented Nov 11, 2024 via email

@littlebeanbean7
Author

Good morning @hosseinfani, thank you for your kind help! I checked out the cikm22 branch, but when I run python -u main.py -data ../data/raw/dblp/toy.dblp.v12.json -domain dblp -model fnn, I get another error, copied below. As a user outside your lab, what I need is one stable branch with up-to-date sample code showing how to run it. Thank you very much!

Building pytrec_eval input for 5 instances ...
Evaluating {'P_2,5,10', 'ndcg_cut_2,5,10', 'map_cut_2,5,10', 'recall_2,5,10'} ...
Averaging ...
Traceback (most recent call last):
  File "main.py", line 124, in <module>
    run(data_list=args.data_list,
  File "main.py", line 97, in run
    aggregate(output)
  File "main.py", line 62, in aggregate
    dfff.set_axis(names, axis=1, inplace=True)
  File "/mnt/data/lingling_env/envs/opentf/lib/python3.8/site-packages/pandas/util/_decorators.py", line 311, in wrapper
    return func(*args, **kwargs)
  File "/mnt/data/lingling_env/envs/opentf/lib/python3.8/site-packages/pandas/core/frame.py", line 4751, in set_axis
    return super().set_axis(labels, axis=axis, inplace=inplace)
  File "/mnt/data/lingling_env/envs/opentf/lib/python3.8/site-packages/pandas/core/generic.py", line 752, in set_axis
    return self._set_axis_nocheck(labels, axis, inplace)
  File "/mnt/data/lingling_env/envs/opentf/lib/python3.8/site-packages/pandas/core/generic.py", line 758, in _set_axis_nocheck
    setattr(self, self._get_axis_name(axis), labels)
  File "/mnt/data/lingling_env/envs/opentf/lib/python3.8/site-packages/pandas/core/generic.py", line 5500, in __setattr__
    return object.__setattr__(self, name, value)
  File "pandas/_libs/properties.pyx", line 70, in pandas._libs.properties.AxisProperty.__set__
  File "/mnt/data/lingling_env/envs/opentf/lib/python3.8/site-packages/pandas/core/generic.py", line 766, in _set_axis
    self._mgr.set_axis(axis, labels)
  File "/mnt/data/lingling_env/envs/opentf/lib/python3.8/site-packages/pandas/core/internals/managers.py", line 216, in set_axis
    self._validate_set_axis(axis, new_labels)
  File "/mnt/data/lingling_env/envs/opentf/lib/python3.8/site-packages/pandas/core/internals/base.py", line 57, in _validate_set_axis
    raise ValueError(
ValueError: Length mismatch: Expected axis has 18 elements, new values have 2 elements
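For context on this error (generic pandas behavior, not OpeNTF-specific code): set_axis with axis=1 replaces all column labels at once, so the new list must match the existing column count exactly; here 2 names were supplied for an 18-column frame. A minimal reproduction with made-up data:

```python
import pandas as pd

# An 18-column frame, like the aggregated metrics table in the traceback.
df = pd.DataFrame([[0] * 18], columns=[f"c{i}" for i in range(18)])

try:
    df.set_axis(["mean", "std"], axis=1)  # 2 names for 18 columns
except ValueError as e:
    print(e)  # → Length mismatch: Expected axis has 18 elements, new values have 2 elements

# Fix: supply exactly one label per column (or select columns first).
df2 = df.set_axis([f"metric_{i}" for i in range(18)], axis=1)
print(len(df2.columns))  # → 18
```

So the underlying bug is that `names` in aggregate() held 2 entries while the concatenated results frame had 18 columns, not anything about the input data itself.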

@Hamedloghmani
Member

Hamedloghmani commented Nov 14, 2024

Hi @hosseinfani , @littlebeanbean7

I pushed some changes that let OpeNTF run without fairness arguments. As far as I checked, it works smoothly now, but do not hesitate to let me know if the issues persist.
Please note:

  1. To run the pipeline without the fairness part, make sure you have removed 'fair' from params.py. Also, passing fairness arguments is entirely optional now:
     'cmd': ['train', 'test', 'eval'],  # 'train', 'test', 'eval', 'plot', 'agg', 'fair'
  2. In the first message of this issue, your run was successful. The phrase "Loading fairness results failed!" is just a message, not an error. It comes from our lazy-load paradigm: we only compute the fairness results if they are not already available.
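The lazy-load behavior described above can be sketched roughly like this (a hypothetical helper, not OpeNTF's actual code): the "failed!" message just signals a cache miss, after which the results are computed and cached for the next run.

```python
import os, pickle, tempfile

def load_or_compute(cache_path, compute_fn):
    # Try the cached file first; on a cache miss, compute and persist.
    try:
        with open(cache_path, "rb") as f:
            return pickle.load(f)
    except FileNotFoundError:
        print("Loading fairness results failed!")  # informational, not fatal
        result = compute_fn()
        with open(cache_path, "wb") as f:
            pickle.dump(result, f)
        return result

path = os.path.join(tempfile.mkdtemp(), "ndkl.pkl")
first = load_or_compute(path, lambda: {"ndkl": 0.12})   # miss: computes + caches
second = load_or_compute(path, lambda: {"ndkl": 0.99})  # hit: loads cached value
print(second)  # → {'ndkl': 0.12}
```

On the second call the compute function is never invoked, which is why a rerun over existing output prints nothing alarming.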

I hope my explanation helps.

@littlebeanbean7
Author

Hello @Hamedloghmani @hosseinfani, thank you and the team so much for fixing this! I pulled the latest code and confirm that on the "main" branch, issues #260 and #263 are resolved: I no longer get the 'labels' error and no longer see the "Loading fairness results failed!" message. Would your team please also look into issues #259, #261, and #262? Thank you!
