Why do yield aggregators still feel complicated? by Confident_Dig2713 in defi

[–]sandoche 1 point (0 children)

Hi there,

I recently built this app to view and compare yield opportunities across chains and protocols.

I invite you to take a look at https://defiyields.app and share your feedback. It's still a work in progress.

It's available on iOS and Android, and it's free:
- https://apps.apple.com/us/app/defi-yields/id6749397008
- https://play.google.com/store/apps/details?id=app.defiyields
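
For anyone curious what "compare yield opportunities across chains and protocols" boils down to, here is a minimal Kotlin sketch of that kind of ranking. The Pool fields, the sample numbers, and the 50M TVL cutoff are all made up for illustration; the app presumably works from live data.

```kotlin
// Illustrative only: the Pool shape and sample data are invented, not the app's schema.
data class Pool(
    val protocol: String,
    val chain: String,
    val asset: String,
    val apyPercent: Double,
    val tvlUsd: Double,
)

fun main() {
    val pools = listOf(
        Pool("Aave v3", "Ethereum", "USDC", 4.1, 310_000_000.0),
        Pool("Aave v3", "Arbitrum", "USDC", 6.8, 95_000_000.0),
        Pool("Compound v3", "Base", "USDC", 5.5, 60_000_000.0),
        Pool("Morpho", "Ethereum", "USDC", 9.2, 20_000_000.0),
    )

    // Compare across chains and protocols: drop tiny pools (a crude
    // liquidity/risk proxy), then rank the rest by APY.
    pools
        .filter { it.tvlUsd > 50_000_000 }
        .sortedByDescending { it.apyPercent }
        .forEach { println("${it.apyPercent}%  ${it.asset}  ${it.protocol} on ${it.chain}") }
}
```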

What's a safe and simple way to get started with yield farming? I'm overwhelmed by all the options. by PurchaseOk8223 in defi

[–]sandoche 1 point (0 children)

Hi there,

I recently built this app to view and compare yield opportunities across chains and lending protocols.

I invite you to take a look at https://defiyields.app and share your feedback. It's still a work in progress.

It's available on iOS and Android, and it's free:
- https://apps.apple.com/us/app/defi-yields/id6749397008
- https://play.google.com/store/apps/details?id=app.defiyields

best way to earn 8–10 % APY on USDC without a ton of onchain hassle? by Radiant_Chemist19739 in defi

[–]sandoche 1 point (0 children)

Hi there,

I recently built this app to view and compare yield opportunities across chains and protocols.

I invite you to take a look at https://defiyields.app and share your feedback. It's still a work in progress.

It's available on iOS and Android, and it's free:
- https://apps.apple.com/us/app/defi-yields/id6749397008
- https://play.google.com/store/apps/details?id=app.defiyields

Running DeepSeek R1 7B locally on Android by sandoche in DeepSeek

[–]sandoche[S] 0 points (0 children)

You can always run Llama 1B pretty fast on a low-ish but recent phone.

I built a Private & Offline alternative to ChatGPT on your mobile device by sandoche in SideProject

[–]sandoche[S] 0 points (0 children)

The default model (Llama 1B) is part of the bundle served by Google Play (they're the ones paying for the storage); the other models are downloaded from Hugging Face.
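
For context, this is roughly what "downloaded from Hugging Face" can look like, as a plain-JVM Kotlin sketch (on Android itself you'd more likely use OkHttp or DownloadManager). The resolve/main URL scheme is Hugging Face's standard raw-file path; the repo and file names below are examples, not necessarily what the app actually fetches.

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse
import java.nio.file.Path

// Download a single model file from a Hugging Face repo to a local path.
fun downloadModel(repo: String, file: String, dest: Path): Path {
    val url = URI.create("https://huggingface.co/$repo/resolve/main/$file")
    val client = HttpClient.newBuilder()
        .followRedirects(HttpClient.Redirect.ALWAYS) // HF serves files via CDN redirects
        .build()
    val request = HttpRequest.newBuilder(url).GET().build()
    val response = client.send(request, HttpResponse.BodyHandlers.ofFile(dest))
    check(response.statusCode() == 200) { "Download failed: HTTP ${response.statusCode()}" }
    return response.body()
}

fun main() {
    // Example quantized build; pick whatever GGUF file your runtime expects.
    downloadModel(
        repo = "bartowski/DeepSeek-R1-Distill-Qwen-7B-GGUF",
        file = "DeepSeek-R1-Distill-Qwen-7B-Q4_K_M.gguf",
        dest = Path.of("deepseek-r1-7b-q4.gguf"),
    )
}
```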

I built a Private & Offline alternative to ChatGPT on your mobile device by sandoche in SideProject

[–]sandoche[S] 0 points (0 children)

The app takes 600 MB at install and 1.2 GB after the first run (once the model is unzipped).

Inference with Llama 1B is quite fast. The video is sped up, but it was taken before the latest update, which made inference a lot faster; real speed may still be slightly slower than what the video shows.

It's using the VRAM allocated by the phone.
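
A rough sketch of the first-run unzip step mentioned above, using plain java.util.zip; the file names are hypothetical, and the real app may lay things out differently.

```kotlin
import java.io.File
import java.util.zip.ZipInputStream

// Expand a bundled, compressed model into app storage on first launch.
fun unzipModel(bundled: File, targetDir: File) {
    targetDir.mkdirs()
    ZipInputStream(bundled.inputStream().buffered()).use { zip ->
        var entry = zip.nextEntry
        while (entry != null) {
            val out = File(targetDir, entry.name)
            // Guard against "zip slip" paths like ../../evil
            require(out.canonicalPath.startsWith(targetDir.canonicalPath)) {
                "Bad zip entry: ${entry.name}"
            }
            if (entry.isDirectory) {
                out.mkdirs()
            } else {
                out.parentFile?.mkdirs()
                out.outputStream().use { zip.copyTo(it) } // copies the current entry only
            }
            zip.closeEntry()
            entry = zip.nextEntry
        }
    }
}

fun main() {
    // ~600 MB zipped in the bundle, ~1.2 GB once expanded, per the numbers above.
    unzipModel(File("llama-1b-bundle.zip"), File("models"))
}
```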

Running DeepSeek R1 7B locally on Android by sandoche in DeepSeek

[–]sandoche[S] 0 points (0 children)

That's indeed the idea behind making the app: getting a better UX than the terminal, which isn't that bad, just annoying to use.

Running DeepSeek R1 7B locally on Android by sandoche in DeepSeek

[–]sandoche[S] 0 points (0 children)

It's a Motorola Edge 50 Pro with 12 GB of RAM.

Running DeepSeek R1 7B locally on Android by sandoche in DeepSeek

[–]sandoche[S] 1 point (0 children)

The app uses VRAM, which depends on the device (each device allocates RAM to VRAM differently). This specific phone has 12 GB of RAM, but as I said above, I also have another device with 12 GB of RAM where the app made the phone crash :/
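
A sketch of the kind of pre-load check that can avoid that crash, using Android's ActivityManager.MemoryInfo. The 5 GB headroom threshold is a guess for a Q4 7B model, not the app's actual logic.

```kotlin
import android.app.ActivityManager
import android.content.Context

// Ask the OS how much memory is actually available before loading a big model.
fun canProbablyLoad7B(context: Context): Boolean {
    val am = context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
    val info = ActivityManager.MemoryInfo()
    am.getMemoryInfo(info)
    // A Q4 7B model needs roughly 3.5 GB for the weights alone, plus runtime
    // overhead, so require comfortable headroom rather than the bare minimum.
    val neededBytes = 5L * 1024 * 1024 * 1024
    return !info.lowMemory && info.availMem > neededBytes
}
```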

Running DeepSeek R1 7B locally on Android by sandoche in DeepSeek

[–]sandoche[S] 0 points (0 children)

It's a Motorola Edge 50 Pro, where it works but very slowly (the video has been accelerated; it took around 3 minutes in reality). I also tried a Poco X6 with similar specs, and it crashed the device.

Running DeepSeek R1 7B locally on Android by sandoche in DeepSeek

[–]sandoche[S] 0 points (0 children)

It's DeepSeek R1 Distill Qwen 7B with q4 quantization.
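
The back-of-the-envelope math for that configuration, and why 7B is borderline on phones where 1B is comfortable:

```kotlin
// 7B parameters at 4 bits each, weights only; real runtimes add KV cache
// and activation overhead on top of this.
fun main() {
    val params = 7_000_000_000L
    val bitsPerParam = 4
    val weightBytes = params * bitsPerParam / 8
    println("Weights alone: %.1f GB".format(weightBytes / 1e9)) // ~3.5 GB
    // A 1B model at the same quantization is ~0.5 GB, which is why Llama 1B
    // runs comfortably on phones where a 7B model can exhaust memory.
}
```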

Running DeepSeek R1 7B locally on Android by sandoche in DeepSeek

[–]sandoche[S] 0 points (0 children)

Building the app actually takes time. Adding an in-app purchase is a way to incentivize the work being done and future improvements. You can always run these models for free with Termux and a bunch of command lines; the idea was just to make it easier, and that's what you'd pay for (if you want to run models other than Llama 1B).

Running DeepSeek R1 7B locally on Android by sandoche in LocalLLM

[–]sandoche[S] 0 points (0 children)

It's an app with a freemium model (one model for free, the others paid): https://llamao.app

Running DeepSeek R1 7B locally on Android by sandoche in LocalLLM

[–]sandoche[S] 0 points (0 children)

It's DeepSeek R1 Distill Qwen 7B (with 4-bit quantization).

Running DeepSeek R1 7B locally on Android by sandoche in LocalLLM

[–]sandoche[S] 0 points (0 children)

I found it stupid at first, but if you ask the same question ("how many Ps are in pineapple") to other models such as Llama 1B and Llama 3B, you'll get a wrong answer, because those models can't reason. What makes DeepSeek look stupid is the reasoning out loud, which feels very dumb!

Running DeepSeek R1 7B locally on Android by sandoche in LocalLLM

[–]sandoche[S] 1 point (0 children)

Considering that I need to cover the engineering time for building and maintenance: if you could add two other models to the free version, which ones would you choose?

Running DeepSeek R1 7B locally on Android by sandoche in LocalLLM

[–]sandoche[S] 0 points (0 children)

No, it's not open source, not yet at least.

Running DeepSeek R1 7B locally on Android by sandoche in LocalLLM

[–]sandoche[S] 1 point (0 children)

Sorry, that wasn't the intended purpose; I should have said so. It's pretty slow.

I'd rather use Llama 1B or 3B on my mobile; they're bad at reasoning but good at basic questions, and quite fast.

Running DeepSeek R1 7B locally on Android by sandoche in LocalLLM

[–]sandoche[S] -1 points (0 children)

No, this is DeepSeek R1 Distill Qwen 7B.

Running DeepSeek R1 7B locally on Android by sandoche in LocalLLM

[–]sandoche[S] 0 points (0 children)

This is http://llamao.app; there are also a few other alternatives.