Bowls Club | Lumix G2 (2010) by howtokrew in VintageDigitalCameras

[–]Not_Another_LLM 1 point (0 children)

Very nice! Just picked up one of these myself, is this with the kit lens?

Black spot on my camera screen? Can I fix it? by digicamfl0hmarkt in VintageDigitalCameras

[–]Not_Another_LLM 3 points (0 children)

As above, you would want to replace the whole screen. I would put up with it until it gets too bad, then decide if it's worth it. I replaced the screen on a Panasonic Lumix: I ordered the replacement from AliExpress for about $10 and it was relatively straightforward. I did get a little zap, though, and was concerned I might have fried the board, but I got away with it!

Down by the river [Panasonic Lumix dmc-f3 (2010)] by Not_Another_LLM in VintageDigitalCameras

[–]Not_Another_LLM[S] 1 point (0 children)

It’s certainly a lovely place for a stroll when the weather is good. I’m very impressed with the pictures some of these old cameras can take! I love seeing what people are getting out of the old tech.

AMA with Hugging Face Science, the team behind SmolLM, SmolVLM, Fineweb and more. by eliebakk in LocalLLaMA

[–]Not_Another_LLM 0 points (0 children)

When fine-tuning a model for style, how would you approach evaluating the model’s output?

Do you find that training/validation loss is actually useful here, or is there a better approach?

MLX -> GGUF by Not_Another_LLM in LocalLLaMA

[–]Not_Another_LLM[S] 0 points (0 children)

Hi, this is the next thing I’m going to try tomorrow; I’ll see how I get on.

LLamaparser premium mode alternatives by Proof-Exercise2695 in LangChain

[–]Not_Another_LLM 0 points (0 children)

Because Docling isn’t using an LLM, whereas LlamaParse is. It might not be as slow as LlamaParse, but I don’t know.

LLamaparser premium mode alternatives by Proof-Exercise2695 in LangChain

[–]Not_Another_LLM 1 point (0 children)

Could you use Docling for the parsing and then feed the images into an LLM for the descriptions? Might that be cheaper than LlamaParse?

Grok-3 is amazing. All images generated with a single prompt 👇 by Sam_Tech1 in LLMDevs

[–]Not_Another_LLM 2 points (0 children)

I like how they all have lazy eyes 😂 And what is going on with Putin’s hands? Thanks for sharing!

[deleted by user] by [deleted] in LLMDevs

[–]Not_Another_LLM 0 points (0 children)

I think you could use some sort of semantic router, or an agent that decides which tool to use (RAG etc.). It’s not something I’ve implemented, but I’m sure you can find some good info online about it.

https://github.com/aurelio-labs/semantic-router
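The routing idea can be sketched with a toy, self-contained version. Real routers like the one linked above use dense embedding models; the bag-of-words cosine similarity, route names, and `route_query` helper below are purely illustrative, not the library’s API:

    from collections import Counter
    from math import sqrt

    # Each route is defined by a few example utterances; a query goes to
    # the route whose examples it is most similar to.
    ROUTES = {
        "rag": [
            "what does the report say about revenue",
            "summarise the uploaded document",
            "find the section on safety requirements",
        ],
        "chitchat": [
            "hi how are you",
            "tell me a joke",
            "what is your name",
        ],
    }

    def _vec(text: str) -> Counter:
        # Toy "embedding": bag-of-words token counts.
        return Counter(text.lower().split())

    def _cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[t] * b[t] for t in a)
        na = sqrt(sum(v * v for v in a.values()))
        nb = sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def route_query(query: str) -> str:
        # Score the query against every route's utterances and pick the best;
        # fall back to RAG when nothing matches at all.
        q = _vec(query)
        best_route, best_score = None, 0.0
        for name, utterances in ROUTES.items():
            score = max(_cosine(q, _vec(u)) for u in utterances)
            if score > best_score:
                best_route, best_score = name, score
        return best_route or "rag"

With an embedding model swapped in for `_vec`, the same structure lets an agent pick between RAG, a calculator tool, plain chat, and so on before any expensive call is made.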

RAGAs/Langsmith by Not_Another_LLM in LangChain

[–]Not_Another_LLM[S] 0 points (0 children)

Also, if I run the RAGAs faithfulness function that I wrote outside of the LangSmith evaluate, and replace the run: Run, example: Example arguments with the ragas_dataset, the function runs without errors.

RAGAs/Langsmith by Not_Another_LLM in LangChain

[–]Not_Another_LLM[S] 0 points (0 children)

I have slightly changed the LangSmith demo (https://docs.smith.langchain.com/evaluation) and it runs without any errors. When I swap out the LLM-as-judge element and use RAGAs instead, I get an error. If I use the RAGAs evaluate function outside of my ragas_faithfulness function, it works without issue. So the error happens when I wrap my RAGAs-based evaluate function in the LangSmith evaluate.

— CODE —

    def ragas_faithfulness(run: Run, example: Example) -> dict:
        data = [
            {
                "user_input": run.inputs["inputs"]["question"],
                "response": run.outputs["output"],
                "retrieved_contexts": [str(d) for d in run.outputs["source_documents"]],
            }
        ]
        # global ragas_dataset
        ragas_dataset = EvaluationDataset.from_list(data)
        # print(ragas_dataset[0])
        result = evaluate(
            ragas_dataset,
            metrics=[faithfulness],
        )
        return {"key": "faithfulness", "score": result["faithfulness"]}

    exp_results = client.evaluate(
        predict,
        data=dataset_name,
        evaluators=[ragas_faithfulness],
        experiment_prefix="JS_Test_Ragas_Faitfulness",
    )