
Issues about Positional Embedding and Reference Point #32

Open
tae-mo opened this issue Oct 22, 2022 · 1 comment

Comments


tae-mo commented Oct 22, 2022

Hi, thanks for sharing your wonderful work.

I have a question about this function:

def gen_sineembed_for_position(pos_tensor):

which embeds positional information into query_pos.
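For context, here is the whole function as I understand it (my paraphrase of the implementation, so details may differ slightly):

import math
import torch

def gen_sineembed_for_position(pos_tensor):
    # pos_tensor: (num_queries, batch, 2) holding normalized (x, y) in [0, 1]
    scale = 2 * math.pi
    dim_t = torch.arange(128, dtype=torch.float32, device=pos_tensor.device)
    dim_t = 10000 ** (2 * (dim_t // 2) / 128)  # 128 frequencies per coordinate
    x_embed = pos_tensor[:, :, 0] * scale
    y_embed = pos_tensor[:, :, 1] * scale
    pos_x = x_embed[:, :, None] / dim_t
    pos_y = y_embed[:, :, None] / dim_t
    # alternate sin/cos, giving a 128-dim embedding per coordinate
    pos_x = torch.stack((pos_x[:, :, 0::2].sin(), pos_x[:, :, 1::2].cos()), dim=3).flatten(2)
    pos_y = torch.stack((pos_y[:, :, 0::2].sin(), pos_y[:, :, 1::2].cos()), dim=3).flatten(2)
    # concatenating the two 128-dim halves yields the 256-dim query_pos
    pos = torch.cat((pos_y, pos_x), dim=2)
    return pos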

However, I don't understand why 2 * (dim_t // 2) has to be divided by 128 instead of the actual embedding dimension (e.g., 256 by default):

dim_t = 10000 ** (2 * (dim_t // 2) / 128)

Does it work correctly even though dim_t is divided by 128?

I would appreciate being corrected!

And another question:
when we compute equation (1) in the paper,

tmp[..., :2] += reference_before_sigmoid

can I understand this as the model learning "offsets" from the corresponding reference points?
And what is the precise role of the reference points?
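For reference, my understanding of the surrounding decoder logic is roughly the following (a sketch with assumed names such as bbox_embed and inverse_sigmoid; the actual code may differ):

import torch

def inverse_sigmoid(x, eps=1e-5):
    # map normalized coordinates back to logit space
    x = x.clamp(min=eps, max=1 - eps)
    return torch.log(x / (1 - x))

def predict_boxes(bbox_embed, hs, reference_points):
    # hs: decoder output, (num_queries, batch, d_model)
    # reference_points: normalized (x, y) per query, in [0, 1]
    reference_before_sigmoid = inverse_sigmoid(reference_points)
    tmp = bbox_embed(hs)                      # (num_queries, batch, 4) box logits
    tmp[..., :2] += reference_before_sigmoid  # center = reference point + predicted offset
    return tmp.sigmoid()                      # normalized (cx, cy, w, h)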

Thank you!

tae-mo changed the title from "Issues about Positional Embedding" to "Issues about Positional Embedding and Reference Point" on Oct 22, 2022
@Run542968

Hi, for question (1): 2 * (dim_t // 2) is divided by 128 because the positional embedding is computed separately along the x and y directions (128 dimensions each), and the two halves are then concatenated, which recovers the full 256 dimensions.
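A quick check with the sketch above (assuming that paraphrase is faithful): each coordinate contributes 128 dimensions, so the concatenated embedding is 256-dimensional.

import torch

pos = torch.rand(300, 2, 2)    # toy (num_queries, batch, xy) reference points
emb = gen_sineembed_for_position(pos)
print(emb.shape)               # torch.Size([300, 2, 256]): 128 dims for y + 128 for x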
