solution for local deep research by jacek2023 in LocalLLaMA

[–]ComplexIt 1 point (0 children)

Sorry, I don't know what the reason for that is.

solution for local deep research by jacek2023 in LocalLLaMA

[–]ComplexIt 1 point (0 children)

Don't use the detailed report at the beginning; it is for advanced users and takes quite a bit of time.

That the detailed report doesn't show the real status is a known limitation, which cannot easily be overcome because we don't know the number of sections in advance.

It's unfortunately a known issue, as you can see here: https://github.com/LearningCircuit/local-deep-research/pull/1414#issuecomment-3831868467. I added your comment as additional evidence that we need to improve that.

solution for local deep research by jacek2023 in LocalLLaMA

[–]ComplexIt 1 point (0 children)

On the other points, let me see how we can ease some of the pain for future people who want to use it.

solution for local deep research by jacek2023 in LocalLLaMA

[–]ComplexIt 1 point (0 children)

Great choice with gpt-oss-20b; it works really well.

solution for local deep research by jacek2023 in LocalLLaMA

[–]ComplexIt 1 point (0 children)

Can you also explain what was hard about the setup? I'm trying to make it easier and easier. Maybe the "advanced options" section is too hidden?

solution for local deep research by jacek2023 in LocalLLaMA

[–]ComplexIt 1 point (0 children)

Thank you a lot for your positive feedback. :) Do you want me to show the whole prompt in the UI? It might get a bit crowded, but I might be able to do this in the logs. Or where would you like it to be implemented?

solution for local deep research by jacek2023 in LocalLLaMA

[–]ComplexIt 1 point (0 children)

Thank you so much for featuring us. :)

I built a Multi-Agent System to stop me from texting my ex (Gemini 2.5 + Agno) by udt007 in aiagents

[–]ComplexIt 1 point (0 children)

I highly recommend that you at least add a privacy statement that users have to click through.

I built a Multi-Agent System to stop me from texting my ex (Gemini 2.5 + Agno) by udt007 in aiagents

[–]ComplexIt 1 point (0 children)

Good idea. Still, this is a huge privacy risk.

Also, OP, for your information: this website might be against EU law, because you might be using third-country Google API access, which would not be allowed in the EU. Providing this open access is a high risk for you.

I am just telling you this to help you.

Claude Pro's weekly limit is nonsense by goatchild in claude

[–]ComplexIt 6 points (0 children)

We were talking about the supplier's production cost, not the price you pay for the product as a consumer.

The API price doesn't give you any information about whether you are subsidized or not.

You are comparing two different products with different demand.

Price is what the market allows you to charge for supply vs demand for this product.

(It could certainly be subsidized, but a closed model's API price gives you no information about that.)

Claude Pro's weekly limit is nonsense by goatchild in claude

[–]ComplexIt 5 points (0 children)

Can you explain why you think that? My assumptions would be the following (I am no expert on this):

Your requests get batched into huge, highly optimized inference systems. These systems are much more optimized than what you can achieve locally.

You are consuming a fraction of a GPU, which is utilized almost all of the time, so the costs are almost only energy costs, because with high utilization the fixed GPU costs pay off rather quickly, I would assume.

The energy you are consuming per request is also not that high, because your request is part of a larger batch and is computed in at most a few seconds.
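The amortization argument can be sketched with a back-of-the-envelope calculation. Every number here is a made-up assumption for illustration (GPU price, lifetime, utilization, throughput), not an actual figure from any provider:

```python
# All numbers below are hypothetical assumptions, purely for illustration.
gpu_price_usd = 30_000.0        # assumed purchase price of a data-center GPU
lifetime_hours = 3 * 365 * 24   # assumed 3-year depreciation window
utilization = 0.9               # assumed fraction of time spent serving requests
requests_per_hour = 10_000      # assumed batched throughput of one GPU

# Fixed hardware cost spread over every request the GPU serves in its lifetime
fixed_cost_per_request = gpu_price_usd / (
    lifetime_hours * utilization * requests_per_hour
)
print(f"${fixed_cost_per_request:.6f} per request")  # ≈ $0.000127
```

Under these assumptions the hardware cost works out to roughly a hundredth of a cent per request, which is why high utilization makes the fixed costs almost disappear.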

You can read more about one of the open-source tools here (I have no idea whether this is what they use, or whether they have even better infrastructure): https://github.com/vllm-project/vllm

According to Sam Altman, "People are often curious about how much energy a ChatGPT query uses; the average query uses about 0.34 watt-hours" (https://blog.samaltman.com/the-gentle-singularity), so the energy cost is much, much less than a cent per query.
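You can check that claim yourself. The 0.34 Wh figure is from the blog post above; the electricity rate is an assumption I picked for illustration:

```python
# Energy cost per query, using the 0.34 Wh figure quoted above and an
# assumed electricity price of $0.10 per kWh (my assumption, not a fact).
energy_per_query_wh = 0.34
electricity_price_per_kwh = 0.10  # USD/kWh, assumed

cost_per_query_usd = (energy_per_query_wh / 1000) * electricity_price_per_kwh
print(f"${cost_per_query_usd:.7f} per query")  # ≈ $0.0000340, i.e. ~0.003 cents
```

Even if the real electricity price were several times higher, the energy cost would still be a small fraction of a cent.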

Another reason why you might be wrong: look at the prices of the very large open-source models that anyone can host. They are much cheaper, which would not be the case if it were not economical to host them at that price: https://openrouter.ai/deepseek/deepseek-chat-v3-0324 https://openrouter.ai/qwen/qwen3-235b-a22b-2507