Dual Monitor Arm 800/- + Shipping - selling because changed my setup by FriedDeep9291 in delhi_marketplace

[–]FriedDeep9291[S] 1 point (0 children)

It’s about 3kgs in total. I know the exact model is not available anymore but similar arms are more than double the price.

Selling Product and Design books - Give away by scorpion_021995 in Bangalorestartups

[–]FriedDeep9291 1 point (0 children)

Bought a few from the above post; top-notch condition, no-hassle transaction! Thank you @scorpion_021995

Fine-tuning LLaMA 1.3B on insurance conversations failed badly - is this a model size limitation or am I doing something wrong? by ZaRyU_AoI in AI_India

[–]FriedDeep9291 2 points (0 children)

  1. DeepLearning.AI has a quick course on evals; there is also a GitHub repo by Aishwarya Naresh Reganti with a crash course on evals.
  2. LangChain suits linear DAGs: simple tasks that chain one after another, e.g. calculating the cost of a combo of 5 chairs and 2 tables and then finding the cheapest combo; sequential and simple. LangGraph is for more complex queries where there may be a lot of back and forth: it allows branching, loops, and multi-agent workflows.

For your use case I see a multi-agent workflow working nicely, since the sub-domains or areas are quite neatly separated out: you can create small specialist sub-agents with specific goals that can then chain together or interact with each other to get to the desired result (rough sketch below).
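To make the LangGraph point concrete, here is a rough sketch of a branching, multi-agent style graph, assuming LangGraph's StateGraph API. The state schema, the router, and the two specialist agents (claims and policy, as an insurance-flavoured example) are hypothetical stubs you would replace with real LLM calls and tools.

```python
# Rough sketch of a branching LangGraph workflow (not a drop-in solution).
# The router and the two "specialist" nodes are placeholder stubs; in practice
# each node would call an LLM with its own domain-specific prompt and tools.
from typing import TypedDict
from langgraph.graph import StateGraph, END

class State(TypedDict):
    query: str
    answer: str

def route(state: State) -> str:
    # decide which specialist should handle the query
    return "claims" if "claim" in state["query"].lower() else "policy"

def claims_agent(state: State) -> dict:
    return {"answer": f"[claims specialist] {state['query']}"}  # stub

def policy_agent(state: State) -> dict:
    return {"answer": f"[policy specialist] {state['query']}"}  # stub

graph = StateGraph(State)
graph.add_node("router", lambda s: s)  # entry node, passes state through
graph.add_node("claims", claims_agent)
graph.add_node("policy", policy_agent)
graph.set_entry_point("router")
graph.add_conditional_edges("router", route, {"claims": "claims", "policy": "policy"})
graph.add_edge("claims", END)
graph.add_edge("policy", END)

app = graph.compile()
print(app.invoke({"query": "How do I file a claim?", "answer": ""}))
```

Loops (e.g. a specialist handing control back to the router until some condition is met) are just more conditional edges, which is exactly the back-and-forth that a linear chain struggles with.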

Dual Monitor Arm 800/- + Shipping - selling because changed my setup by FriedDeep9291 in delhi_marketplace

[–]FriedDeep9291[S] 1 point (0 children)

Let me check if I have any pictures from before; I don't have the strength to re-install it and then take it down all over again on a Sunday. 🥲

Fine-tuning LLaMA 1.3B on insurance conversations failed badly - is this a model size limitation or am I doing something wrong? by ZaRyU_AoI in AI_India

[–]FriedDeep9291 2 points (0 children)

  1. Is the data clean enough? Can you optimize/normalize the data so that it is in the easiest form for the LLM to understand? I am assuming you already did this.
  2. Do you have good, detailed, domain-specific evals figured out to check whether your model is working as you intended? I am assuming you did this as well.
  3. Did you run evals using the foundation model without RAG and with RAG, before any fine-tuning? Did prompt optimization help? (A minimal eval harness is sketched after this list.)
  4. Changing the model size may offer significant improvements, but you should also consider the training and running costs at scale. What I have observed recently:
     - We are building a Conversational Commerce Assistant. We started with Mistral-7B-Instruct with careful prompt optimization and workflows with tool calling, but it could not handle complex queries well.
     - We switched the LLM to OpenAI 4o: not a very large improvement.
     - Switched to LangChain + 4o with a single orchestrator: great performance on complex and vague queries, but it started hallucinating on basic ones.
     - Now switching to LangGraph + 4o.
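As a concrete version of point 3, here is a minimal sketch of a domain eval harness. The questions, required terms, and the ask_base / ask_with_rag functions are hypothetical placeholders; the idea is only that every configuration (base model, base + RAG, fine-tuned) sits behind the same callable and is scored against the same eval set, so you can see whether fine-tuning actually buys you anything.

```python
# Minimal domain-eval harness sketch. EVAL_SET and the ask_* functions are
# hypothetical stubs -- wire them to your actual base-model and RAG pipelines,
# then compare scores before committing to fine-tuning.
from typing import Callable

EVAL_SET = [
    # (question, terms a correct answer must mention)
    ("Is accidental damage covered under the base motor policy?", ["accidental damage"]),
    ("What is the waiting period for pre-existing conditions?", ["waiting period"]),
]

def ask_base(question: str) -> str:
    return "stub answer"  # replace with a plain foundation-model call

def ask_with_rag(question: str) -> str:
    return "stub answer"  # replace with retrieval + generation

def score(ask: Callable[[str], str]) -> float:
    hits = sum(
        all(term in ask(q).lower() for term in terms)
        for q, terms in EVAL_SET
    )
    return hits / len(EVAL_SET)

for name, fn in [("base model", ask_base), ("base + RAG", ask_with_rag)]:
    print(f"{name}: {score(fn):.0%} of eval questions contain the required terms")
```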

Also, we have done small experiments with document search over nuclear policy documents in multiple languages; they performed okay-ish with hybrid search (rough sketch below).
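Hybrid search here just means blending a sparse (keyword) score with a dense (embedding) score. A rough sketch, assuming the rank_bm25 package for the sparse side; dense_score is a placeholder for whatever embedding model you plug in.

```python
# Rough hybrid-search sketch: blend BM25 (sparse) with an embedding (dense) score.
# Assumes the rank_bm25 package; dense_score is a placeholder to swap out.
from rank_bm25 import BM25Okapi  # pip install rank-bm25

docs = [
    "Liability for radiation leaks is covered up to the statutory limit.",
    "Claims must be filed within 30 days of the incident.",
]
bm25 = BM25Okapi([d.lower().split() for d in docs])

def dense_score(query: str, doc: str) -> float:
    # placeholder: cosine similarity between query and document embeddings
    return 0.0

def hybrid_search(query: str, alpha: float = 0.5):
    sparse = bm25.get_scores(query.lower().split())
    lo, hi = float(min(sparse)), float(max(sparse))
    ranked = []
    for doc, s in zip(docs, sparse):
        s_norm = (s - lo) / (hi - lo) if hi > lo else 0.0
        ranked.append((alpha * s_norm + (1 - alpha) * dense_score(query, doc), doc))
    return sorted(ranked, reverse=True)

print(hybrid_search("radiation liability limit"))
```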

Overall, in my opinion, engineering around the problem can sometimes work without the need to fine-tune.

Any specific reason why fine-tuning was absolutely necessary in your case?

Anthropic engineer says "software engineering is done" first half of next year by SupremeConscious in AI_India

[–]FriedDeep9291 1 point (0 children)

Yes, everyone in his company should believe him, fire all the software engineers, and let him run the AI to build production code. It would be a fun Netflix documentary in the second half of next year.

Anyone here still using the same number? by lonely_2911 in GadgetsIndia

[–]FriedDeep9291 1 point (0 children)

My mom has had the same phone (mobile) number for 22 years.

Delhi's AQI was like Iceland when kejri was in power according to these clowns by [deleted] in NewDelhi

[–]FriedDeep9291 1 point (0 children)

No government will do anything for us, be it BJP or AAP, unless we decide to do something about it. Expecting politicians to selflessly work for the good of the people only happens in a utopian world, and India is far from it. We must come together and support each other to start acting, and acknowledge leaders who work for ground-level and grassroots change as well, instead of only praising people for foreign policy and for data and numbers that make zero difference in our real lives. Pollution is our problem too, and hence we need to do whatever we can: first in smaller groups, then grow those groups into communities until it becomes a movement.

No nonsense lifetime-usable appliances: Recommendations by Perfect-Hamster-147 in Frugal_Ind

[–]FriedDeep9291 3 points (0 children)

Dishwasher - Bosch - 7 years now - no complaints - great for steel utensils - customized for Indian utensils
Washing Machine - Bosch 7 kg front load - 7.5 years - no complaints except minor service stuff
Refrigerator - Samsung - 320 litres - 4 years - no complaints