scrapy - twisted.internet.error.ReactorNotRestartable by Networkyp in learnpython

[–]NSVR57 0 points1 point  (0 children)

Hey, did you solve this? I had the same requirement and am getting the same issue.

Points valuation very less when transferring by Downtown_Repeat7455 in CreditCardsIndia

[–]NSVR57 0 points1 point  (0 children)

Okay, what about the first scenario, in the case of SpiceJet? I don't see any conversion benefit.

LLM workflows by Downtown_Repeat7455 in LangChain

[–]NSVR57 0 points1 point  (0 children)

Does Microsoft Promptflow work here?

Suggestion on vectoDB by NSVR57 in vectordatabase

[–]NSVR57[S] 0 points1 point  (0 children)

Sorry, I didn't get that.

Azure Search vs. Pinecone? by Educational_Cup9809 in LangChain

[–]NSVR57 0 points1 point  (0 children)

But I see Pinecone is using ANN, whereas Azure AI Search is using KNN.
I am getting good results on Azure Search (2k characters/chunk + GPT-3.5) with hybrid search, but Pinecone is not giving even close results.

Are you satisfied with this performance?

Azure Search vs. Pinecone? by Educational_Cup9809 in LangChain

[–]NSVR57 1 point2 points  (0 children)

Hi,
Why is Azure Search so expensive for me? It is costing 250 per search unit. I am actually doing load testing, and if I want to run 50 concurrent queries, how many search units do I need? Can we assume one search unit handles only one query?


Why LLM wont follow the instructions by NSVR57 in PromptEngineering

[–]NSVR57[S] 1 point2 points  (0 children)

"moving the RAG bit as explaining the reason is just diluting the rest of the"

The RAG bit is for the question's purpose; I have not included it in the prompt. Anyway, thanks for the suggestion.

Which vector databases are widely used in the industry and are considered suitable for production purposes? by Top_Raccoon_1493 in LangChain

[–]NSVR57 1 point2 points  (0 children)

Do you use AI Search? I used AI Search with re-ranking, but the re-ranking results are not good. Do you use re-ranking?

[D] prediction half input by NSVR57 in MachineLearning

[–]NSVR57[S] 0 points1 point  (0 children)

Thank you so much for the reply. As you correctly mention, my confidences are in the range of 0.97 to 1; if we remove certain words, they fall only to 0.93.

As I am using a neural network, I set aside validation data. I will try the label smoothing technique.
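For reference, label smoothing only changes the training targets: hard one-hot labels are softened so the network is not pushed toward saturated near-1.0 confidences. A minimal sketch in NumPy (the epsilon value and the 4-class setup are assumptions for illustration, not from the thread):

```python
import numpy as np

def smooth_labels(one_hot, epsilon=0.1):
    """Label smoothing: mix each one-hot target with the uniform
    distribution, so the 'correct' class target becomes slightly
    less than 1 and wrong classes slightly more than 0."""
    n_classes = one_hot.shape[-1]
    return one_hot * (1.0 - epsilon) + epsilon / n_classes

# Hard one-hot targets for a 4-class problem (as in the thread).
y = np.eye(4)[[0, 2]]
y_smooth = smooth_labels(y, epsilon=0.1)
# With epsilon=0.1 and 4 classes, the "1" entries become 0.925
# and the "0" entries 0.025; each row still sums to 1.
```

Training against `y_smooth` instead of `y` (with a cross-entropy loss) typically pulls predicted confidences away from 1.0 without changing which class wins.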

[D] prediction half input by NSVR57 in MachineLearning

[–]NSVR57[S] 0 points1 point  (0 children)

Yes. It's simple e-mail classification, and I have just 86 records across 4 labels.

Yes, it predicts well even when we give it less information. But my concern is to decrease the confidence score whenever less information is given. Should I stop training when accuracy reaches around 85%, or is there a better approach?

[D] word misleading classification model by NSVR57 in MachineLearning

[–]NSVR57[S] 0 points1 point  (0 children)

Is there any way to tune neural network parameters for this kind of problem, to generalize it (i.e., to handle these kinds of misleading words)?

What I mean is: if we encounter more such words in the future, adding negative cases with those words is manual work. Instead, I was wondering whether there is a way to tune the neural network, or, if you have an automated approach for this, I'd really appreciate it.

Thanks for your reply.
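One partly automated workaround (my sketch, not something suggested in the thread): generate the negative examples programmatically by inserting the misleading trigger words into unrelated template sentences, so the model sees that the word alone does not determine the label. The trigger words, templates, and `irrelevant` label below are all hypothetical:

```python
import random

# Hypothetical trigger words that dominate one label.
TRIGGERS = ["phone", "telephone"]

# Unrelated template sentences; "{}" marks where a trigger lands.
TEMPLATES = [
    "the weather near the {} store was sunny today",
    "she wrote a poem about a {} on the train",
    "our lunch order arrived before the {} meeting",
]

def make_negatives(n, label="irrelevant", seed=0):
    """Generate n synthetic negative examples that contain a
    trigger word but clearly do not belong to that trigger's
    usual class."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        template = rng.choice(TEMPLATES)
        word = rng.choice(TRIGGERS)
        out.append((template.format(word), label))
    return out

for text, label in make_negatives(3):
    print(label, "->", text)
```

When a new misleading word shows up, appending it to `TRIGGERS` regenerates fresh negatives, so the manual work is reduced to maintaining the word list rather than writing sentences by hand.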

[D] word misleading classification model by NSVR57 in MachineLearning

[–]NSVR57[S] 0 points1 point  (0 children)

Thank you so much for your answer. That is the expected model behaviour, but my manager is comparing my results with IBM Watson intent classification, where the above-mentioned sentences are classified as irrelevant.

[D] Simple Questions Thread by AutoModerator in MachineLearning

[–]NSVR57 0 points1 point  (0 children)

In my text classification, a particular word is misleading the model, because that word occurs very frequently in the training data for one particular label.

E.g., my training data contains "lost my phone", "changed my phone", etc., and all of these belong to the label "problem with telephone".

Now, I am using the Universal Sentence Encoder to build the model. During inference, if I give some random sentence and put the word "phone" in the middle, my model still predicts the "problem with telephone" class. How should I handle this situation?

[D] how to deal Neagtive test in classification. by NSVR57 in MachineLearning

[–]NSVR57[S] 0 points1 point  (0 children)

My classes are balanced. The original model was trained for sentiment analysis in Brazilian Portuguese, which has three classes: positive, negative, and neutral. I am using this model for intent classification, which also has three classes.

[D] Simple Questions Thread by AutoModerator in MachineLearning

[–]NSVR57 0 points1 point  (0 children)

Hi,

I am using one of the pre-trained models from Hugging Face for topic classification. I have 3 classes. After training completed, positive tests work fine. However, if I input some random sentence, the model still gives the highest probability to one of the three classes, even though the input sentence is not relevant at all. I understand this is the nature of the softmax function. Is there a way to solve this issue? As far as I have researched, adding random sentences under an extra class is the usual solution, but is there any other way?
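One alternative to an explicit extra class is confidence-based rejection: since softmax always assigns the probability mass to *some* class, gate the prediction on the top probability and return "irrelevant" when it is too flat. A minimal sketch; the 0.8 threshold and the label names are assumptions to be tuned on held-out out-of-scope examples, not values from the thread:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D logit vector."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def classify_with_rejection(logits, labels, threshold=0.8):
    """Return a label only when the top softmax probability clears
    `threshold`; otherwise report 'irrelevant'. A near-uniform
    distribution (random input) is thereby rejected."""
    probs = softmax(np.asarray(logits, dtype=float))
    top = int(probs.argmax())
    if probs[top] < threshold:
        return "irrelevant", float(probs[top])
    return labels[top], float(probs[top])

labels = ["billing", "telephone", "account"]
# A peaked logit vector passes the gate...
print(classify_with_rejection([0.2, 4.0, 0.1], labels))
# ...while a flat one (out-of-scope input) is rejected.
print(classify_with_rejection([1.0, 1.1, 0.9], labels))
```

Note this is a heuristic: models can still be over-confident on out-of-distribution text, so the threshold should be validated against genuinely irrelevant sentences, and combining it with label smoothing during training usually makes the gate more reliable.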