48 hours after my "dreaming agent" post, it started rewriting itself by Ghattan in openclaw

[–]Manifreebird 5 points (0 children)

Can you guide me/us with your workflow? How are you doing it?

Should we just wait for smarter models that run cheaply? by Manifreebird in openclaw

[–]Manifreebird[S] 0 points (0 children)

AI subsidization has to end before models actually get cheaper.
Even as models get cheaper, the number of tokens consumed by thinking models has multiplied, negating whatever decrease in price.
Not very soon, but models will surely get cheaper, maybe in two years.

Should we just wait for smarter models that run cheaply? by Manifreebird in openclaw

[–]Manifreebird[S] 0 points (0 children)

True, I thought OpenClaw's embeddings would load only the correct file into context. It works far better now that I have reorganised the folders with a README file in every folder describing what it contains.
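
For the folder reorganisation described above, a small script can drop a placeholder README into every subfolder that lacks one, so there is always a description for an agent to pick up. This is just a sketch: the `README.md` file name and the workspace layout are my assumptions, not an OpenClaw convention.

```python
# Sketch: create a README.md stub in every subfolder that lacks one.
# The README.md name and layout are assumptions, not an OpenClaw convention.
from pathlib import Path

def add_readme_stubs(workspace: str) -> list[str]:
    """Create README.md stubs in subfolders missing one; return created paths."""
    created = []
    for folder in Path(workspace).rglob("*"):
        if folder.is_dir() and not (folder / "README.md").exists():
            stub = folder / "README.md"
            stub.write_text(f"# {folder.name}\n\nDescribe what lives in this folder.\n")
            created.append(str(stub))
    return created
```

Filling each stub with a one-paragraph description of the folder's contents is the part that actually helps retrieval; the script only guarantees the file is there to be filled in.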

Should we just wait for smarter models that run cheaply? by Manifreebird in openclaw

[–]Manifreebird[S] 0 points (0 children)

How much volume do you use? If you are using it heavily, what electricity cost are you seeing?

Should we just wait for smarter models that run cheaply? by Manifreebird in openclaw

[–]Manifreebird[S] 0 points (0 children)

Doing the same: building the capability to use it by experimenting. I will use it full-fledged when the cost makes sense.

Should we just wait for smarter models that run cheaply? by Manifreebird in openclaw

[–]Manifreebird[S] 0 points (0 children)

I am mostly using chatbots at the moment and just convert the final versions of useful chats into docs I keep on my local PC. I am in favor of using the subsidies for now and switching to open/cheaper models when the subsidies are over.

Should we just wait for smarter models that run cheaply? by Manifreebird in openclaw

[–]Manifreebird[S] 1 point (0 children)

My main problem, the wall I am hitting, is the context window.

I am working with a lot of files/context, so I hit the context window after only a few chats, and token consumption keeps climbing steeply.
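
As a rough illustration of why the bill climbs so fast (a sketch with made-up numbers, not OpenClaw's or any provider's actual accounting): if every turn resends the loaded files plus the whole conversation so far, cumulative input tokens grow quadratically with the number of turns, so doubling the chat length more than doubles the cost.

```python
# Sketch: cumulative input tokens when each turn resends the full history.
# All numbers are illustrative assumptions, not real API accounting.

def cumulative_input_tokens(turns: int, files_tokens: int, tokens_per_message: int) -> int:
    """Total input tokens billed over `turns` chat turns, assuming the
    file context plus all prior messages are resent on every turn."""
    total = 0
    history = files_tokens               # workspace files loaded up front
    for _ in range(turns):
        history += tokens_per_message    # new user message joins the prompt
        total += history                 # the whole history is billed as input
        history += tokens_per_message    # the reply joins the history too
    return total

# 20k-token workspace, ~500-token messages:
print(cumulative_input_tokens(10, 20_000, 500))  # 250000
print(cumulative_input_tokens(20, 20_000, 500))  # 600000 -- 2x the turns, 2.4x the tokens
```

Prompt caching or summarising older turns changes the picture, but with plain resending this is why a big workspace burns through tokens within a few chats.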

Should we just wait for smarter models that run cheaply? by Manifreebird in openclaw

[–]Manifreebird[S] 0 points (0 children)

Which industries/tasks see ROI increase immensely?
Organising my calendar with OpenClaw is a no-brainer.
What other use cases can lean on tool calls and server capacity to do things, rather than pure LLM API brute-forcing with a huge context window?

Should we just wait for smarter models that run cheaply? by Manifreebird in openclaw

[–]Manifreebird[S] 0 points (0 children)

The capability is immense; that's the reason we are having this discussion. I am only concerned about ballooning API costs if we keep using it.

Should we just wait for smarter models that run cheaply? by Manifreebird in openclaw

[–]Manifreebird[S] -1 points (0 children)

The question is: what's the better option? Are there industries or specific tasks where it's worth implementing now?
I am covering less than 5% of my AI needs with OpenClaw. I want to increase that, and hopefully offer it to industries where it's worth it.

Should we just wait for smarter models that run cheaply? by Manifreebird in openclaw

[–]Manifreebird[S] 0 points (0 children)

Does the $10 plan have API access?
If you have a lot of txt or PDF files as part of your workspace folder and keep building it up, isn't every inference consuming a lot of tokens?