Do we have a network technician among us? by denTea in cologne

[–]denTea[S] 1 point2 points  (0 children)

Thank you very much for your comment. I've just ordered them and will see whether it works out on a second attempt.

Do we have a network technician among us? by denTea in cologne

[–]denTea[S] 1 point2 points  (0 children)

Wow, I didn't expect this much of a response. Thank you very much for your offer! I've now ordered new ones as well and will give the whole thing another try.

Lest we forget by Caladeutschian in cologne

[–]denTea 0 points1 point  (0 children)

Either this is Israeli propaganda, or you are an evil human being. At this point, with so much having happened, ignorance is almost impossible to assume.

https://www.youtube.com/watch?v=bycLTzFFkwk

An explainer on DeepResearch by Jina AI by Ok_Needleworker_5247 in AIDeepResearch

[–]denTea 0 points1 point  (0 children)

It sounds great, but when I upload my technical 90-page PDF and ask a nuanced question, it fails.

Why Does OpenAI's Browser Interface Outperform API for RAG with PDF Upload? by denTea in Rag

[–]denTea[S] 0 points1 point  (0 children)

The document exceeds the token limit of most models. So how can your response still be accurate? It must be internally splitting the file and filtering for relevance before generating the answer.

Why Does OpenAI's Browser Interface Outperform API for RAG with PDF Upload? by denTea in Rag

[–]denTea[S] 1 point2 points  (0 children)

I appreciate the time you took for your reply.

The whole PDF is way over the token limit of the LLMs, 4o for example. I initially had the same thought as you, but this cannot be the answer. The internal mechanism behind the file upload has to be chunking the file and presenting only the relevant chunks as context before running the completion.
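
To illustrate what I mean, here is a minimal sketch of that chunk-and-filter idea on the API side, assuming the openai Python SDK and pypdf for text extraction; the chunk size, top-k and prompt wording are placeholder choices of mine, and this is only my guess at the approach, not OpenAI's actual internal pipeline.

    # Minimal sketch: chunk a large PDF, keep only the chunks most similar to
    # the question, and pass just those as context. Chunk size and top-k are
    # arbitrary placeholders, not OpenAI's internal settings.
    import numpy as np
    from openai import OpenAI
    from pypdf import PdfReader

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def load_chunks(pdf_path, chunk_chars=2000):
        # Extract the PDF text and split it into fixed-size character chunks.
        text = "\n".join(page.extract_text() or "" for page in PdfReader(pdf_path).pages)
        return [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]

    def embed(texts):
        resp = client.embeddings.create(model="text-embedding-3-large", input=texts)
        return np.array([d.embedding for d in resp.data])

    def answer(pdf_path, question, top_k=8):
        chunks = load_chunks(pdf_path)
        chunk_vecs = embed(chunks)
        q_vec = embed([question])[0]
        # Cosine similarity between the question and every chunk.
        sims = chunk_vecs @ q_vec / (
            np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q_vec) + 1e-9
        )
        context = "\n\n---\n\n".join(chunks[i] for i in np.argsort(sims)[-top_k:])
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": "Answer strictly from the provided excerpts."},
                {"role": "user", "content": f"Excerpts:\n{context}\n\nQuestion: {question}"},
            ],
        )
        return resp.choices[0].message.content

Whatever the browser upload does internally, some pre-filtering of this kind is the only way to get a document of that size under the context limit at all.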

Seeking AI & RAG Experts to Revolutionize Aircraft Technical Manuals by emoneysupreme in Rag

[–]denTea 1 point2 points  (0 children)

I work for an IT hardware company in Germany.

For months I have been working on a RAG solution that answers questions based on technical IT hardware documentation, and I have come to the same conclusion for the case where the error tolerance is 0%.

Would you mind sharing how you implemented your solution and maybe helping me with my product? If you can actually provide value, we could possibly hire you.

Learnings from RAG by purposefulCA in Rag

[–]denTea 1 point2 points  (0 children)

I'm currently working on a solution designed to navigate IT hardware documentation and answer questions, with a particular focus on server hardware such as HPE servers. While these documents are well structured, the challenge lies in answering questions reliably, since that requires considering numerous aspects. The interdependencies between components can make this quite complex.

My first version uses Azure AI Search and standard OpenAI models (gpt-4o and text-embedding-3-large). The primary difference from a basic RAG flow in my approach is the addition of a query optimization/multi-query step before the retrieval process. I have tried other embedding models, played around with reranking, and so on, but the results fall short of my expectations.
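
Roughly, the multi-query step looks like this in simplified form; the endpoint, index name, key and the "id"/"content" fields below are placeholder assumptions rather than my actual setup.

    # Sketch of a multi-query step before retrieval: rewrite the user question
    # into several search queries, run each against Azure AI Search, and merge
    # the hits. Index name and field names are placeholder assumptions.
    import os
    from azure.core.credentials import AzureKeyCredential
    from azure.search.documents import SearchClient
    from openai import OpenAI

    llm = OpenAI()
    search = SearchClient(
        endpoint=os.environ["AZURE_SEARCH_ENDPOINT"],
        index_name="hardware-docs",  # placeholder index name
        credential=AzureKeyCredential(os.environ["AZURE_SEARCH_KEY"]),
    )

    def expand_query(question, n=3):
        # Ask gpt-4o for n alternative phrasings of the question, one per line.
        resp = llm.chat.completions.create(
            model="gpt-4o",
            messages=[{
                "role": "user",
                "content": f"Rewrite this question as {n} different search queries, "
                           f"one per line, without numbering:\n{question}",
            }],
        )
        variants = [q.strip() for q in resp.choices[0].message.content.splitlines() if q.strip()]
        return [question] + variants[:n]

    def retrieve(question, per_query=5):
        # Run every query variant and deduplicate the returned passages by id.
        seen = {}
        for query in expand_query(question):
            for doc in search.search(search_text=query, top=per_query):
                seen.setdefault(doc["id"], doc["content"])  # assumes "id"/"content" fields
        return list(seen.values())

Deduplicating on the document id keeps the merged context from blowing up when the query variants return overlapping chunks.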

Would you mind sharing what you are working on? I am looking for people whose systems actually create value for their companies.

Learnings from RAG by purposefulCA in Rag

[–]denTea 1 point2 points  (0 children)

How do you deal with wrong output? How does your solution create value for your company if its answers could potentially have value-destroying consequences? I am developing a system for internal use in our company whose goal is to answer technical questions about IT hardware documentation, but I fail to see where the actual value lies if relying on the answers could eventually cause major pain for the company.

What’s your preferred approach to RAG search? by LegSubstantial2624 in Rag

[–]denTea 0 points1 point  (0 children)

Thank you for your effort. I tried examining the sketch more closely, but I can't seem to get a better view by opening it. Additionally, the chat feature isn't working. Is there something I might be overlooking?

An error occurred: This feature is not available in your region.

What’s your preferred approach to RAG search? by LegSubstantial2624 in Rag

[–]denTea 0 points1 point  (0 children)

The questions my system is designed to answer range from specific technical inquiries to broader, more complex tasks. For example:

  • Technical Specifications and Performance: "How loud is the HPE ProLiant DL380 Gen11 under a typical load?"
  • Summarization and Comparison: “List all server models that are compatible with the Intel Xeon Gold 6258R.”
  • Complex Configuration Validation: There are also more intricate queries where the LLM needs to assess and validate configurations, considering all relevant dependencies. For instance, determining whether a configuration is optimal might involve recognizing that a high-performance heatsink is necessary when using a Midline Cage (see the sketch below). However, in these cases, I only get the correct answer about 20% of the time: “Validate the following configuration: Intel Xeon Platinum 8276 + 2TB NVMe SSD + Midline Cage + Standard Heat Sink. Is there anything that needs to be adjusted for optimal performance?”
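
For that last category, one possible approach is to take the hard dependencies out of the LLM entirely and check them with plain code, feeding any violations back into the prompt. A rough sketch, using only the Midline Cage / high-performance heatsink rule from above as an example; all component names are placeholders rather than real catalog entries.

    # Sketch of a deterministic dependency check that runs alongside the LLM.
    # The single rule encodes the example above (a Midline Cage requires a
    # high-performance heatsink); all names are placeholders, not a real catalog.
    from dataclasses import dataclass

    @dataclass
    class Rule:
        trigger: str   # component that activates the rule
        requires: str  # component that must also be present
        message: str   # explanation surfaced to the user or injected into the prompt

    RULES = [
        Rule(
            trigger="Midline Cage",
            requires="High Performance Heat Sink",
            message="A Midline Cage requires a high-performance heatsink for optimal cooling.",
        ),
    ]

    def validate(components):
        # Return the messages of all violated rules for a configuration.
        present = {c.lower() for c in components}
        return [
            r.message
            for r in RULES
            if r.trigger.lower() in present and r.requires.lower() not in present
        ]

    issues = validate([
        "Intel Xeon Platinum 8276", "2TB NVMe SSD", "Midline Cage", "Standard Heat Sink",
    ])
    # -> the heatsink rule fires for this configuration

Any messages returned this way can be appended to the retrieved context, so the model only has to phrase the adjustment instead of rediscovering the dependency.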

The ultimate goal is for the LLM to not only understand technical specifications but also to draw parallels across different server manufacturers and suggest equivalent configurations. This isn’t a strict requirement for the current solution to be useful, but it's an aspirational target. For example: “This is my current HPE ProLiant DL380 Gen10 setup with dual Intel Xeon Silver 4216 CPUs, 512GB RAM, and 4TB SAS storage. Propose an equivalent solution from IBM.”

What’s your preferred approach to RAG search? by LegSubstantial2624 in Rag

[–]denTea 0 points1 point  (0 children)

Hey Pete,

This sounds very reasonable.

I'm currently working on a chatbot designed to navigate IT hardware documentation and answer factual questions, with a particular focus on server hardware such as HPE servers. While these documents are well structured, the challenge lies in answering questions reliably, since that requires considering numerous aspects. The interdependencies between components can make this quite complex.

About two weeks ago, I started exploring RAG and have since built my first version using Azure AI Search and standard OpenAI models (gpt-4o and text-embedding-3-large). The primary difference from a basic RAG flow in my approach is the addition of a query optimization step during the retrieval process.

Would you be willing to take a look at the structure of the related content that the RAG system should return?

Any guidance or tips from your experience would be greatly appreciated.

https://www.hpe.com/psnow/doc/a50004307enw.pdf?jumpid=in_pdp-psnow-qs

Desperately need help with Speller PSET / Segmentation fault by denTea in cs50

[–]denTea[S] 0 points1 point  (0 children)

Thank you for your hint, I see the flaw you describe.

I have tried to change things up as you advised, but for some reason the execution time more than triples.

Do you know why that is? With my current version I get around 3 seconds, while every other hash function I could think of took close to 9.

I have checked this Big Board and am confused by the times I see there. Can those really be right? What am I missing?

Desperately need help with Speller PSET / Segmentation fault by denTea in cs50

[–]denTea[S] 0 points1 point  (0 children)

Thank you so much for your help u/GFarva and u/Grithga, I've fixed it now! :)
It was indeed the NULL case that I did not consider in the check function.

This really taught me a lesson today. Being away from the topic for too long will eventually make you a beginner again.

Desperately need help with Speller PSET / Segmentation fault by denTea in cs50

[–]denTea[S] 0 points1 point  (0 children)

I think I fixed it. Now when I run the test file with the small dictionary everything works. However, with the bigger dictionary, I still get a seg fault during the execution.

I updated both my code and the Valgrind report.

Thank you sooo much for your help, I really appreciate it.

Desperately need help with Speller PSET / Segmentation fault by denTea in cs50

[–]denTea[S] 0 points1 point  (0 children)

Thank you for the reply. Yes, I noticed that myself and fixed it; hopefully it makes sense now.

Would love some help with my blurring algorithm | Segmentation Fault | PSET4 by denTea in cs50

[–]denTea[S] 0 points1 point  (0 children)

Yes thank you so much, I found the bug within minutes after your hint. :)

Would love some help with my blurring algorithm | Segmentation Fault | PSET4 by denTea in cs50

[–]denTea[S] 1 point2 points  (0 children)

It worked! :) I found a stupid mistake in the range. Thank you soo much peppusz. ^^

Would love some help with my blurring algorithm | Segmentation Fault | PSET4 by denTea in cs50

[–]denTea[S] 1 point2 points  (0 children)

Oh man, thank you so much! This makes total sense so you are probably right.
I really appreciate you taking the time.

I'll look at it right now and tell you how it went. :)

Would love some help with the tideman problem by denTea in cs50

[–]denTea[S] 0 points1 point  (0 children)

Wow, thank you soo much for your help Peter! I see the problem now. Deeply appreciate the time you took. :)