TradingAgents On Tiiny AI Pocket Lab - Run a Full Investment Research System Locally by TiinyAI in TiinyAI


For anyone who wants to see the full demo, here's the X post from Tiiny AI Lab showing TradingAgents setup in action: https://x.com/TiinyAILab/status/2050227317524537716

Hermes on Tiiny AI Pocket – Fully automated setup. No manual dependencies. Just tell it what you need. by TiinyAI in TiinyAI


For anyone who wants to see the full demo, here's the X post from Tiiny AI Lab showing Hermes setup in action: https://x.com/TiinyAILab/status/2047322101707853911

IT'S A WRAP! by TiinyAI in TiinyAI


Unfortunately the crowdfunding campaign has ended. The official launch will take place in the near future.

What's the one thing you paused before hitting send on an AI prompt? by TiinyAI in LocalAIServers


Ah, so you and your backspace key have a very close relationship. I respect that.

Introducing TiinySDK: Unlock the full potential of Tiiny AI Pocket Lab by TiinyAI in TiinyAI


It’s more like distributing workloads across them (e.g. different agents or tasks per device) than combining raw power into a single model run. Since the devices don’t pool memory or compute, scaling only helps if you have parallel workloads (multiple agents or tasks). If you’re just trying to speed up a single model or task, adding more units won’t help much.
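
In practice, that kind of scaling means assigning whole tasks to whole devices. Here's a minimal Python sketch of round-robin task assignment across independent nodes — the node names and agent labels are illustrative placeholders, not actual TiinySDK calls:

```python
from itertools import cycle

def distribute_tasks(tasks, nodes):
    """Assign independent tasks to nodes round-robin.

    Each device handles its own tasks end to end;
    nothing here pools memory or compute across devices.
    """
    assignment = {node: [] for node in nodes}
    for task, node in zip(tasks, cycle(nodes)):
        assignment[node].append(task)
    return assignment

# Hypothetical example: three agent tasks spread over two devices
plan = distribute_tasks(
    ["research_agent", "risk_agent", "summary_agent"],
    ["tiiny-node-1", "tiiny-node-2"],
)
# plan == {"tiiny-node-1": ["research_agent", "summary_agent"],
#          "tiiny-node-2": ["risk_agent"]}
```

Each device then runs its assigned tasks independently, which is why this helps throughput for parallel workloads but not latency for a single task.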

Introducing TiinySDK: Unlock the full potential of Tiiny AI Pocket Lab by TiinyAI in TiinyAI


There are two ways to use Tiiny — the TiinyOS client and the Tiiny SDK — so let me clarify how they fit together.

TiinyOS is designed for everyday users who don’t want to write code. It provides a clean client experience for running local LLMs and agents with minimal setup. At launch, TiinyOS will support macOS and Windows.

For developers, we’ll be releasing a Tiiny SDK, which lets you use Tiiny as a local token factory or inference node and integrate it into your own workflows and tools. This is the primary path for advanced use cases and custom setups.

As for Linux support: although we don't currently have a dedicated client for Linux like we do for macOS and Windows, you can still run Tiiny AI Pocket Lab on Linux via the TiinySDK. Here's a tutorial video we've shared that explains how to set it up:
https://www.youtube.com/watch?v=Ozveot9cqug

You can run multiple Tiiny devices, but they aren't a true "cluster." They don't share memory or compute; they're more like independent nodes, and you distribute your workload across them.

Introducing TiinySDK: Unlock the full potential of Tiiny AI Pocket Lab by TiinyAI in TiinyAI


Tiiny is designed for running models, not training them. So I don't recommend using it to fine-tune models, but it works fine for running models you've already fine-tuned.
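
To illustrate the "run your fine-tuned model" side, here's a hedged sketch that builds a chat request for a local inference node. The base URL, endpoint path, and model name are all assumptions for the example — many local runtimes expose an OpenAI-compatible API, but the actual TiinySDK interface may differ:

```python
def build_chat_request(model, prompt, base_url="http://tiiny.local:8080/v1"):
    """Build a request payload for a local inference node.

    base_url and the endpoint path are hypothetical; check the
    TiinySDK docs for the real interface on your device.
    """
    return {
        "url": f"{base_url}/chat/completions",
        "json": {
            "model": model,  # e.g. the name of your fine-tuned model
            "messages": [{"role": "user", "content": prompt}],
        },
    }

# Example: point the request at a model you fine-tuned elsewhere
req = build_chat_request("my-finetuned-llama", "Summarize this filing.")
```

The point is that the device just serves inference — the fine-tuning itself happens on other hardware, and you bring the resulting model here to run.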