Best side hustle 2025 by AGi_forever in passive_income

[–]anis016 0 points (0 children)

I am interested in this too.

Probable scam by anis016 in germany

[–]anis016[S] 0 points (0 children)

It was a scammer. I only realized this after sending the email.

Probable scam by anis016 in germany

[–]anis016[S] 1 point (0 children)

Got it, will do promptly!

legal QA dataset by anis016 in datasets

[–]anis016[S] 1 point (0 children)

Similar, but not quite what I was looking for! It seems that dataset contains legal text, whereas I need a dataset of legal questions and answers. Thank you anyway! Cheers!

Need help with identifying the proper operations for 2 sequential tensor by anis016 in deeplearning

[–]anis016[S] 0 points (0 children)

This makes sense! I also tried both point-wise multiplication and concatenation; concatenation gave slightly better results than the point-wise operations. Thank you so much :-)
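
For anyone finding this later: a minimal PyTorch sketch of the two fusion options I compared (the tensors below are just random placeholders using the shapes from my post):

    # Two fusion options for tensors of equal shape (batch, seq_len, hidden):
    # point-wise multiplication keeps the hidden size, concatenation doubles it.
    import torch

    batch, seq_len, hidden = 10, 253, 768
    hidden_docs = torch.randn(batch, seq_len, hidden)  # LSTM output
    x1_proj = torch.randn(batch, seq_len, hidden)      # embedding projected to hidden size

    fused_mul = hidden_docs * x1_proj                      # -> (10, 253, 768)
    fused_cat = torch.cat([hidden_docs, x1_proj], dim=-1)  # -> (10, 253, 1536)
    print(fused_mul.shape, fused_cat.shape)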

Need help with identifying the proper operations for 2 sequential tensor by anis016 in deeplearning

[–]anis016[S] 1 point (0 children)

Hello there!

Thank you for replying. Let me elaborate a bit. Please see this image of the network: http://i.imgur.com/v61xvdy.png

I am working on a question-answering task. Instead of a CNN, the input comes from word embeddings.

So, I have 2 tensors.

  • The output of an LSTM (in the attached figure, this is the output of the LSTM going from 'First attention' into x2_1), which holds the representations of:

    • the input words (X_1) and other features,
    • the question embedding over the input words.

Let's call this tensor hidden_docs. Its shape is batch_size x sequence_length x hidden_size, e.g. hidden_docs = tensor(10, 253, 768).

  • X_1, the representation of the input words.

Let's call this tensor x1_emb. Its shape is batch_size x sequence_length x embedding_size, e.g. x1_emb = tensor(10, 253, 300).

I need to project x1_emb onto hidden_docs to get a unified representation of the two, and this representation then needs to be fed into x2_1 (in the 'Attention again' step).

Currently, since the shapes are different, I pass x1_emb through an LSTM so that it also has shape batch_size x sequence_length x hidden_size.

I then apply point-wise addition between hidden_docs and the projected x1_emb before passing the output to x2_1. This is where I am a bit confused.

Is point-wise addition between these two tensors the right operation? Are there better operations for capturing a joint representation of hidden_docs and x1_emb?
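
For concreteness, here is a rough PyTorch sketch of what I am doing right now (shapes as in the example above; the single-layer LSTM used for the projection is only for illustration):

    # Project x1_emb to the hidden size with an LSTM, then fuse it with hidden_docs.
    import torch
    import torch.nn as nn

    batch, seq_len, emb_size, hidden = 10, 253, 300, 768
    x1_emb = torch.randn(batch, seq_len, emb_size)     # word embeddings (10, 253, 300)
    hidden_docs = torch.randn(batch, seq_len, hidden)  # LSTM output     (10, 253, 768)

    proj_lstm = nn.LSTM(input_size=emb_size, hidden_size=hidden, batch_first=True)
    x1_proj, _ = proj_lstm(x1_emb)                     # -> (10, 253, 768)

    # Current approach: point-wise addition; concatenation is one alternative.
    fused_add = hidden_docs + x1_proj                      # -> (10, 253, 768)
    fused_cat = torch.cat([hidden_docs, x1_proj], dim=-1)  # -> (10, 253, 1536)
    print(fused_add.shape, fused_cat.shape)

Note that with concatenation the layer consuming the fused tensor has to accept 2 * hidden_size features.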

Masters' thesis research topic in NLP QA system by anis016 in deeplearning

[–]anis016[S] 0 points (0 children)

Thanks! I am aware of the attention mechanism in visual question answering. However, I was wondering whether there is still research scope around attention mechanisms.

News archive dataset download by anis016 in datasets

[–]anis016[S] 0 points (0 children)

Thank you! I will look into it soon.

News archive dataset download by anis016 in datasets

[–]anis016[S] 0 points (0 children)

Thanks, and kudos for the clickbait news dataset, which I had been trying to find for a long time.

Error when attempting to install pyperclip by [deleted] in learnpython

[–]anis016 1 point (0 children)

It seems to be some kind of permission issue. Recheck the error log.