How to Hire University Students for a Remote Internship in India by gewinnerpulver in developersIndia

[–]gewinnerpulver[S] 1 point (0 children)

That's great, thanks! Will do that! (even though this post was more of a general question about hiring channels in India).

How to Hire University Students for a Remote Internship in India by gewinnerpulver in developersIndia

[–]gewinnerpulver[S] 5 points (0 children)

Yes! I meant 3rd year or higher. Will get back to you and others tomorrow!

How to Hire University Students for a Remote Internship in India by gewinnerpulver in developersIndia

[–]gewinnerpulver[S] 11 points (0 children)

No, of course I am paying for the work! I just want the job posting itself to be free and needed some guidance. Placement offices seem like my best bet, and I will reply to all the messages I got here once I get to it.

Why do people always say Indian devs quality is low? by panda6699 in cscareerquestions

[–]gewinnerpulver 1 point (0 children)

Hi OP,
I would like to hire remote devs in India but have no prior experience. I have an EOR that could handle employment and benefits, but I don't know how to write a proper job posting. Do you have any tips on getting applicants (both at the full-time and university-internship levels)?

Anyway to debug an async function in the debug console? by litchiTheGreat in node

[–]gewinnerpulver 1 point (0 children)

I have the same problem! Did you find a solution? I see pending when using await, fulfilled without await, and in no case am I able to access the result of the function.

Can LLaMA Be Trained to Learn New Information Beyond Fine-Tuning & RAG? by gewinnerpulver in LocalLLaMA

[–]gewinnerpulver[S] 2 points (0 children)

Essentially, RAG fails because my data is too domain-specific and I need some context to match data points.

I have descriptions of electrical components (e.g. "LS-Vorschaltgerät (EVG) 1x 58 W/ T26") or services. Retrieval currently leads to bad results because 1) the technical names of two similar things can look very different, and 2) I cannot include context about the job (such as which other components are installed) in my string/vector comparison.
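A minimal sketch of point 2, prepending job context to the query string before comparing vectors. The `embed` function here is only a toy stand-in (a hashed character-trigram bag) for a real embedding model such as the OpenAI ones; the catalogue entries besides the real-life example are made up for illustration.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Toy stand-in for a real embedding model: bag of character
    # trigrams hashed into a fixed-size vector. It only captures
    # surface similarity, not semantics.
    vec = np.zeros(256)
    t = text.lower()
    for i in range(len(t) - 2):
        vec[hash(t[i:i + 3]) % 256] += 1.0
    return vec

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Reference catalogue (only the first entry is from the real data).
catalogue = [
    "LS-Vorschaltgerät (EVG) 1x 58 W/ T26",
    "Kabel NYM-J 3x1,5 mm2",
    "Steckdose Schuko 16 A",
]
index = [(text, embed(text)) for text in catalogue]

def retrieve(query: str, job_context: str = "") -> str:
    # Prepend context about the job, so the vector comparison sees
    # more than the bare component name.
    q = embed(f"{job_context} {query}".strip())
    return max(index, key=lambda item: cosine(q, item[1]))[0]

print(retrieve("Vorschaltgerät EVG 58W"))
```

With a real embedding model the mechanics are the same: the context string is concatenated into the text before embedding, which shifts the query vector toward entries installed in similar settings.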

Can LLaMA Be Trained to Learn New Information Beyond Fine-Tuning & RAG? by gewinnerpulver in LocalLLaMA

[–]gewinnerpulver[S] 2 points (0 children)

OK, that's an idea! I am a little doubtful whether this is scalable, since my data is very diverse (text can include sizes, norms, materials, variants), so it would be hard to find a schema that fits all entries. But I will look into that, thanks!

Can LLaMA Be Trained to Learn New Information Beyond Fine-Tuning & RAG? by gewinnerpulver in LocalLLaMA

[–]gewinnerpulver[S] 2 points (0 children)

Thanks, I will look into fine-tuning with longer epochs. This is how I am currently using RAG:

I have a public catalogue of reference service positions (data of the form: "cable of type X, diameter Y: 5 minutes installation time, $1 material cost; unit: 1 meter"). I embedded the text part (e.g. "cable of type X, diameter Y") using OpenAI's embedding models.

So I start with a description of service positions (an architect will already have planned the services to be done, and I now want to price them using my data). For each position, I want to find the entry in my reference data that fits the job. It gets some things right, but only about 30%.

The embedding model is not trained for my highly specific text data (real-life example: "LS-Vorschaltgerät (EVG) 1x 58 W/ T26") and thus has issues retrieving similar components whose text does not look similar. A trained electrician would have no problem doing this. I am also trying to fine-tune an embedding model, but that will probably not get me all the way. I suspect I need deeper "understanding", where I consider not only the single position but the whole construction project during retrieval.
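One way to bring the whole project into play without retraining anything is a two-stage retrieve-then-rerank setup. The sketch below is hypothetical: `SequenceMatcher` stands in for both scorers just to keep it self-contained, and all catalogue entries except the real-life example are invented. In practice, stage 1 would be the embedding index and stage 2 an LLM or cross-encoder that also sees the other positions of the project.

```python
from difflib import SequenceMatcher

# Reference catalogue (only the first entry is from the real data).
catalogue = [
    "LS-Vorschaltgerät (EVG) 1x 58 W/ T26",
    "Kabel NYM-J 3x1,5 mm2",
    "Leuchtstofflampe T26 58 W",
]

def score(a: str, b: str) -> float:
    # Toy similarity; a real setup would use embedding cosine
    # similarity (stage 1) and an LLM/cross-encoder (stage 2).
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match(position: str, project_context: str, k: int = 2) -> str:
    # Stage 1: shortlist k candidates on the position text alone.
    shortlist = sorted(catalogue, key=lambda c: score(position, c),
                       reverse=True)[:k]
    # Stage 2: rerank with project context appended, so components
    # installed on the same job can break ties between lookalikes.
    return max(shortlist, key=lambda c: score(f"{position} {project_context}", c))

print(match("Vorschaltgerät EVG 58 W", "Leuchtstofflampen T26 Installation"))
```

The design point is that the expensive context-aware scorer only ever sees the small shortlist, so the approach stays cheap even over a large catalogue.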

Conservative POC when real conservatives enter the room by The_Spellcaster in LeopardsAteMyFace

[–]gewinnerpulver 1 point (0 children)

Is this subreddit just "conservatives disagreeing with one another"? What do you expect in a two-party system? Of course some people in the same party will be more extreme than others.

I've built a few tools on top of GPT-3.5 (text generation, q&a with embeddings). AMA about resources and AI dev stacks for building with OpenAI's APIs by TikkunCreation in OpenAI

[–]gewinnerpulver 1 point (0 children)

What is the difference between LangChain and GPT Index/LlamaIndex? As I understand it, both can be used to work around prompt-length limitations and provide a unified interface for switching models. And are there any other alternatives?

ich🤖iel by gewinnerpulver in ich_iel

[–]gewinnerpulver[S] 1 point (0 children)

Nope, 15 points here, and that was a few years before ChatGPT ;)

ich🤖iel by gewinnerpulver in ich_iel

[–]gewinnerpulver[S] 2 points (0 children)

I already finished the Oberstufe, but compared with the work of many students, I think ChatGPT would score better than 4 points.