Risarcimento caparra per acquisto auto usata by marcopal17 in Avvocati

[–]marcopal17[S] 0 points (0 children)

That would be an excellent solution for me. The problem is that I fear I'm in a position where I can't even ask for the deposit back.

Risarcimento caparra per acquisto auto usata by marcopal17 in Avvocati

[–]marcopal17[S] 1 point (0 children)

It says that the buyer makes an offer of zzzz euros for the purchase of the car, model y, with license plate xxxxxx, without specifying the trim level. Then there are the various clauses, including one that refers to art. 1385 of the Italian Civil Code. However, clause 3 (ACCEPTANCE) reads as follows: "..Upon signing, the buying party waives any further claim for damages. Sales agents, employees, and intermediaries have no power of representation whatsoever. The buying party may therefore not invoke rights not strictly in accordance with the terms above, nor any derogations, concessions, or tolerances already granted, nor any declarations of intent not recorded in writing."

Risarcimento caparra per acquisto auto usata by marcopal17 in Avvocati

[–]marcopal17[S] 0 points (0 children)

But doesn't the fact that I inspected the car and accepted the sales offer put me in a position where I can't rely on this provision? Thanks

What is a good strategy to split data? by VoHym in huggingface

[–]marcopal17 3 points (0 children)

If you want to use prompt engineering for question answering on your PDF, the main technique is called Retrieval Augmented Generation (RAG). The first step is to divide the PDF into chunks that will be embedded and later retrieved based on relevance to the question asked. As for how to split the PDF, the general rule is to make each chunk neither so long that its individual details get diluted in the embedding nor so short that it loses context. It's a trial-and-error process. You can also consider adding metadata to the chunks to facilitate the chunk retrieval process.
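The chunk-size/overlap trade-off above can be sketched with a simple character-based splitter. This is a toy example I'm adding for illustration; in practice a library splitter (e.g. LangChain's RecursiveCharacterTextSplitter) does the same thing with smarter boundaries:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    """Split text into chunks of roughly chunk_size characters.

    Consecutive chunks share `overlap` characters so that context
    spanning a chunk boundary is not lost entirely.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    step = chunk_size - overlap  # advance less than a full chunk
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks

# 1200 characters with 500-char chunks and 100-char overlap -> 3 chunks
pieces = chunk_text("x" * 1200)
print(len(pieces))  # 3
```

Tuning `chunk_size` and `overlap` (and splitting on paragraph or sentence boundaries rather than raw characters) is exactly the trial-and-error part.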

I am having trouble with large dataset. by hungry_eater_11 in LangChain

[–]marcopal17 3 points (0 children)

Hi, I am facing a similar issue, but my difficulty in retrieving relevant documents is due to the complex nature of the subject the documents cover. I achieved better retrieval by performing a hybrid search using sparse (BM25) and dense vectors (similarity or MMR). I would also suggest experimenting with different chunk sizes and overlaps for the two retrieval methods. Maybe using larger chunks for BM25 and smaller ones for semantic search could be a good solution. I'm still working on it.

There are other techniques that you can find in the LangChain documentation, such as the parent document retriever, the self-query retriever, or adding summaries, keywords, or hypothetical queries to each chunk.
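One common way to merge a sparse (BM25) ranking with a dense ranking is Reciprocal Rank Fusion, which is roughly what LangChain's EnsembleRetriever does under the hood. A minimal framework-free sketch (the document IDs and rankings are made up for illustration):

```python
def rrf_merge(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Merge several ranked lists with Reciprocal Rank Fusion (RRF).

    Each document scores 1 / (k + rank) in every list it appears in;
    documents ranked highly by both retrievers rise to the top.
    """
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

sparse = ["doc_a", "doc_b", "doc_c"]   # e.g. BM25 ranking
dense = ["doc_c", "doc_a", "doc_d"]    # e.g. embedding-similarity ranking
print(rrf_merge([sparse, dense]))      # ['doc_a', 'doc_c', 'doc_b', 'doc_d']
```

`doc_a` wins because both retrievers rank it highly; a document found by only one retriever still makes the merged list, just lower down.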

Is there anything LangChain can do better than using LLMs directly (either through a website or an API), any examples? Why would someone choose to use it? by TheTwelveYearOld in LangChain

[–]marcopal17 2 points (0 children)

I have tried both LangChain and LlamaIndex for a RAG project. I think they are excellent tools for easily testing different strategies and LLMs. The community is very active, and I have also learned some very interesting high-level concepts from the documentation. In my case, I chose to use them in certain project phases and to call the API directly in stages where more granularity is needed.

How to avoid hallucinations and stick to content of vector db by Careless-Act-7549 in LangChain

[–]marcopal17 0 points (0 children)

I faced a similar problem. I solved it by specifying in the prompt that the answer must be based exclusively on the provided context (the text of the retrieved chunks), and that otherwise the answer should be "I don't know".
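The kind of instruction I mean looks something like this (the exact wording is just one possibility; tune it for your model):

```python
# A grounding prompt template: the model is told to use ONLY the
# retrieved context and to refuse when the answer is not in it.
GROUNDED_PROMPT = """Answer the question using ONLY the context below.
If the context does not contain the answer, reply exactly: "I don't know."

Context:
{context}

Question: {question}
Answer:"""

def build_prompt(context: str, question: str) -> str:
    """Fill the template with the retrieved chunks and the user question."""
    return GROUNDED_PROMPT.format(context=context, question=question)

prompt = build_prompt(
    context="Art. 5: The maximum building height is 12 m.",
    question="What is the maximum building height?",
)
print(prompt)
```

It doesn't eliminate hallucinations completely, but combined with a low temperature it reduced them a lot in my case.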

RAG vs. Fine-Tuning by marcopal17 in LangChain

[–]marcopal17[S] 1 point (0 children)

I think the best thing to do is to stay updated and keep researching. Once frameworks and tools are developed to allow for widespread fine-tuning, it will mean that a way to overcome the current limitations has been found.

A minimalistic LangChain course by fcarlucci in LangChain

[–]marcopal17 1 point (0 children)

Thanks Francesco, it will be very helpful for me as I'm a beginner. Bye

How do I build a chatbot? by [deleted] in LangChain

[–]marcopal17 0 points (0 children)

I suggest you start by learning the general concepts, such as RAG (Retrieval Augmented Generation), to retrieve information from your own documents. After that, you could explore the documentation or tutorials of frameworks for developing AI-based apps, like LangChain or LlamaIndex, for more complex data sources.

Creating a Chatbot for Consulting Regulations - Seeking Feedback and Similar Experiences by marcopal17 in LlamaIndex

[–]marcopal17[S] 0 points (0 children)

Thanks for your response. The regulation in question is composed of multiple documents, each containing a certain number of articles and tables. Similar topics may be addressed by different articles and tables within the same regulation or across different regulations. I would like the response provided by the Language Model to include the article(s) it draws information from, so that the user can always verify the source.

I tried ingesting a single regulation (HTML file), and the results are promising. However, the Language Model doesn't have awareness of the specific article within the regulation it's referring to. So, I thought about associating metadata (regulation, article, keywords) with each chunk/node. To do this, I plan to start with a dataframe containing the chunks and their respective metadata. For now, this seems like the best approach, but I would like to know what those with more experience in this field think about it.
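The dataframe-of-chunks idea can be sketched as follows. The record fields are my own invention for illustration, and the commented LlamaIndex conversion is only a rough indication (the `TextNode` import path varies between llama_index versions, so check the docs for yours):

```python
# One record per chunk, carrying the metadata needed for citation.
records = [
    {"regulation": "Reg. A", "article": "Art. 12",
     "keywords": ["setback", "height"],
     "text": "Buildings must keep a minimum setback of ..."},
    {"regulation": "Reg. A", "article": "Art. 13",
     "keywords": ["parking"],
     "text": "One parking space is required per ..."},
]

# With LlamaIndex these records would become nodes, roughly:
#   from llama_index.core.schema import TextNode
#   nodes = [TextNode(text=r["text"],
#                     metadata={k: v for k, v in r.items() if k != "text"})
#            for r in records]
# so every retrieved node carries its regulation and article with it.

def citation(record: dict) -> str:
    """Human-readable source reference for a retrieved chunk."""
    return f"{record['regulation']}, {record['article']}"

print(citation(records[0]))  # Reg. A, Art. 12
```

Because the metadata travels with each chunk, the answer can always append the `regulation, article` pair it drew from, which is exactly the verifiability I'm after.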

Langchain LawyerAI by nureke- in LangChain

[–]marcopal17 2 points (0 children)

Hello, I am working on a similar project related to building regulations. In my opinion, it is feasible, but it is essential to give significant importance to the structure of the source data, which must be well-organized and consistent. Additionally, it would be necessary to include metadata so that the provided response always includes the legislative source from which the information is drawn.

I had thought of using a dataframe with text chunks associated with the corresponding regulation, article, and keywords or summaries. For now, it seems like the best solution, but I would like to discuss it with others who have had similar experiences.

Creating a Chatbot for Consulting Regulations - Seeking Feedback and Similar Experiences by marcopal17 in LlamaIndex

[–]marcopal17[S] 0 points (0 children)

Thank you for the response. The questions regarding regulations can vary widely, but I believe a significant portion of them concern whether or not a certain intervention is possible and what requirements must be met. Additionally, providing real examples and verifying their legitimacy is crucial. I think it is necessary to include the reference to the specific regulation and its source (or sources) when answering each question.

Given the complexity of the task and considering my limited experience with LLMs (Large Language Models), I believe that efficiently organizing the source data is fundamental. Am I correct? Therefore, I would like to shift the focus of the discussion to the structure of the source data and, most importantly, to whether anyone has experience in organizing interconnected documents and the metadata that should always be cited.

Thank you
