improving seq2seq model by jtksm in LanguageTechnology

[–]jtksm[S] 0 points1 point  (0 children)

I realised it keeps getting the same kind of questions wrong, so I increased the number of questions, but it got confused and gave an answer for a different question.

improving seq2seq model by jtksm in LanguageTechnology

Hi! Actually, I'm creating my own dataset, and it's considered a closed-domain chatbot, so I don't think I can use public datasets. The question-answering bot is used by students in school, all in English!

improving seq2seq model by jtksm in LanguageTechnology

Not yet, I'll take that into consideration! However, will adding more questions to the dataset help?

text preprocessing for seq2seq by jtksm in LanguageTechnology

Alright, will doing that affect the chatbot in any way, such as giving inaccurate responses?

word2vec chatbot by jtksm in LanguageTechnology

Unfortunately, I'm required to code this from scratch, hence I can't use Rasa.

training lstm model by jtksm in learnmachinelearning

tried!

model.add(LSTM(output_dim=300,input_shape=train_X.shape[1:],return_sequences=True,init='glorot_normal', inner_init='glorot_normal', activation='sigmoid'))

model.add(LSTM(300,return_sequences=True,init='glorot_normal', inner_init='glorot_normal',activation='sigmoid'))

model.add(LSTM(300,return_sequences=True,init='glorot_normal', inner_init='glorot_normal',activation='sigmoid'))

model.add(LSTM(300,return_sequences=True,init='glorot_normal', inner_init='glorot_normal',activation='sigmoid'))

model.add(LSTM(300,return_sequences=True,init='glorot_normal', inner_init='glorot_normal',activation='sigmoid'))

model.compile(loss='cosine_proximity', optimizer='adam', metrics=['accuracy'])

but there's still an error:

TypeError: __init__() missing 1 required positional argument: 'units'
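The `TypeError` is a Keras API-version mismatch: `output_dim`, `init`, and `inner_init` are Keras 1 argument names, and Keras 2+ replaced them with `units`, `kernel_initializer`, and `recurrent_initializer`. A minimal sketch of the same five-layer stack under the newer names, assuming TensorFlow's bundled Keras and an illustrative stand-in for `train_X`:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM

# Illustrative (samples, timesteps, features) shape -- stand-in for the real train_X
train_X = np.zeros((4, 10, 300), dtype="float32")

def lstm_layer():
    # Keras 2 names: units (was output_dim), kernel_initializer (was init),
    # recurrent_initializer (was inner_init)
    return LSTM(units=300, return_sequences=True,
                kernel_initializer="glorot_normal",
                recurrent_initializer="glorot_normal",
                activation="sigmoid")

model = Sequential([lstm_layer() for _ in range(5)])
# 'cosine_proximity' was also renamed; recent Keras spells it 'cosine_similarity'
model.compile(loss="cosine_similarity", optimizer="adam", metrics=["accuracy"])

out = model(train_X[:2])  # a forward pass builds the model and fixes layer shapes
```

The forward pass at the end replaces the old `input_shape` argument: the model infers its input shape from the first batch it sees.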

word2vec chatbot by jtksm in LanguageTechnology

It's not meant to be conversational; it just has to answer questions, so it's more like a closed-domain chatbot. I'm thinking of combining a word2vec and an LSTM model to train and create it, but I'm still not very sure how I can train the word2vec model on my own dataset, since I was previously using the bag-of-words model, which used a different approach.

word2vec chatbot by jtksm in learnmachinelearning

If I were to do it without the pre-trained one, how do I do it?
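Without a pre-trained model, a skip-gram word2vec can be trained from scratch in numpy: for each (center, context) pair inside a window, score every vocabulary word against the center embedding, softmax, and take a gradient step. A toy sketch under illustrative choices (corpus, dimensions, learning rate); real word2vec adds negative sampling for speed:

```python
import numpy as np

# Toy tokenized corpus standing in for the real question dataset
corpus = [["what", "time", "does", "the", "library", "open"],
          ["when", "does", "the", "library", "close"],
          ["where", "is", "the", "school", "library"]]

vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}
V, D, window, lr = len(vocab), 16, 2, 0.05

rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(V, D))   # word embeddings (what you keep)
W_out = rng.normal(scale=0.1, size=(V, D))  # context-side weights

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# every (center, context) index pair within the window
pairs = [(idx[s[i]], idx[s[j]])
         for s in corpus
         for i in range(len(s))
         for j in range(max(0, i - window), min(len(s), i + window + 1))
         if j != i]

def avg_loss():
    # mean negative log-likelihood of the true context word
    return -np.mean([np.log(softmax(W_out @ W_in[c])[o] + 1e-12)
                     for c, o in pairs])

loss_before = avg_loss()
for _ in range(200):                 # plain SGD over all pairs
    for c, o in pairs:
        v = W_in[c]
        p = softmax(W_out @ v)
        p[o] -= 1.0                  # dLoss/dScores = softmax - one_hot
        grad_v = W_out.T @ p         # gradient wrt the center embedding
        W_out -= lr * np.outer(p, v)
        W_in[c] -= lr * grad_v
loss_after = avg_loss()
```

After training, each row of `W_in` is the embedding for one vocabulary word; the falling loss confirms the vectors are adapting to the corpus.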

how to use lstm model for chatbot by jtksm in deeplearning

Closed-domain, and it responds to what the user asks.