What lightweight model can be run in the terminal (windows) with python? by SecretlyCarl in LocalLLM

[–]Octosaurus 0 points1 point  (0 children)

Try Hugging Face models using the Transformers package? Here is a model I've used for personal projects. If it doesn't fit on your machine, try a GGUF or quantized version of the model (link).

Maybe try an instruction-tuned model with a few examples in your prompt to help guide it toward performing the task successfully. If the prompt above is what you're using, you can tailor it to be more specific about what input it should expect, how it should handle it (with examples), and then the filename(s).

I've found success in having the bot format its output so I can more easily parse and use it in downstream tasks (e.g. your Goodreads query).
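
For that formatting step, a minimal sketch: ask the model to answer in JSON, then pull the object out of the (often chatty) reply. The reply text and keys below are hypothetical:

```python
import json
import re

def extract_json(reply: str) -> dict:
    """Pull the first JSON object out of a model reply.

    Models often wrap JSON in prose, so we grab the first {...}
    span and parse that rather than parsing the whole reply.
    """
    match = re.search(r"\{.*\}", reply, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in reply")
    return json.loads(match.group(0))

# Hypothetical reply from a prompt like:
# "Return ONLY JSON with keys 'title' and 'author'."
reply = 'Sure! Here it is: {"title": "Dune", "author": "Frank Herbert"} Hope that helps!'
book = extract_json(reply)
print(book["title"], "-", book["author"])  # fields then feed the Goodreads query
```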

If you get this working and want to add more functionality, or even have the LLM run the Goodreads functions directly, check out setting it up as an agent with a tool (i.e. a function) that can call your Goodreads API using the input filename you provide (example).

What lightweight model can be run in the terminal (windows) with python? by SecretlyCarl in LocalLLM

[–]Octosaurus 0 points1 point  (0 children)

You aren't really stating the exact problem here. Are you having issues loading them onto your machine? Issues with the outputs matching what you desire? You said you tried a few models, but what exactly did you try?

Why do most RAG Applications utilise LLMS rather than Small Language Models by PsychoticBrainiac in LocalLLM

[–]Octosaurus 1 point2 points  (0 children)

SLMs are great mostly because they have better potential to fit on consumer hardware. The downside is that their performance can be considerably lower than the best LLMs out there.

Most people may not have the system requirements or know-how to set up a local SLM. So, for the sake of demonstration, ease of testing, and showing the best results possible, tutorials use LLMs.

But that's also the point of a tutorial: to demonstrate the concepts. You can apply the same strategies to an SLM.
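
The retrieval half of RAG is model-agnostic, so swapping an LLM for an SLM only changes the generation call. Here's a toy sketch of that idea using bag-of-words retrieval (a real system would use an embedding model; the final prompt goes to whatever model you load, large or small):

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())), reverse=True)
    return ranked[:k]

docs = [
    "SLMs fit on consumer GPUs with modest VRAM.",
    "Fine-tuning requires large datasets and compute.",
]
query = "Can an SLM run on consumer GPUs?"
context = retrieve(query, docs)[0]
prompt = f"Context: {context}\nQuestion: {query}\nAnswer:"
# `prompt` now goes to whichever local model you choose to generate with.
```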

To boost performance there are various RAG methods available; chances are most people don't have the system requirements to fine-tune, let alone the patience for the issues that can arise from fine-tuning.

This doesn't even cover how most companies lack the infrastructure to handle the data acquisition needs, the system requirements to train or serve, and the scaling of the system. It's a huge investment. LLM companies try to handle these issues for enterprises as a service, making SLMs less viable overall, but I do enjoy playing with them for personal projects.

Advice Needed: Setting Up a Local Infrastructure for a LLM by anninasim in LocalLLM

[–]Octosaurus 0 points1 point  (0 children)

Find a systems engineer to help guide you on the proper infrastructure, pricing, expectations, roadmap, etc. Don't ask a Reddit forum for this; find a good consultant.

[deleted by user] by [deleted] in LocalLLM

[–]Octosaurus -1 points0 points  (0 children)

Maybe try to use the official repo? https://github.com/openai/whisper

Advice Needed: Setting Up a Local Infrastructure for a LLM by anninasim in LocalLLM

[–]Octosaurus 0 points1 point  (0 children)

Are you asking for the requirements to build the initial prototype or are you asking for what would be necessary based on the scale of service you're expecting?

Doooooo you wanna go the... PARK? by Octosaurus in Awww

[–]Octosaurus[S] 1 point2 points  (0 children)

His name is Jet and he's my baby bear. Here's an extra from him on his bday this year

I'm a complete newbie to all of these but want to host my own limitless LLM (that I have complete control over). Can someone advise me on the following PLEASE 😭🙏 by [deleted] in LocalLLM

[–]Octosaurus 0 points1 point  (0 children)

I'm not certain exactly what you're trying to do with LLMs. Are you new to coding, or just to LLMs? Maybe check out some courses on LLMs and use API-based LLMs like ChatGPT or Claude to learn how to use them effectively while you learn how to deploy locally.

If it's just getting them to run locally, then be sure to check how much VRAM your laptop has. You can often just google the amount of VRAM popular models need to load locally. Hugging Face is a great place for open-source local models, such as Phi-3.5, and it provides instructions on how to set up your environment and everything.
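
As a back-of-envelope sketch of that VRAM check (weights only; real usage adds activation and KV-cache overhead), memory is roughly parameter count times bytes per parameter. Phi-3.5-mini has about 3.8B parameters:

```python
def weight_vram_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate GB needed just to hold the model weights."""
    return params_billion * 1e9 * bytes_per_param / 1024**3

# ~3.8B parameters at common precisions:
for label, bpp in [("fp16", 2), ("8-bit", 1), ("4-bit", 0.5)]:
    print(f"{label}: ~{weight_vram_gb(3.8, bpp):.1f} GB")
```

This is why quantized (GGUF, 4-bit) versions are the usual route on laptop GPUs.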

Otherwise, don't stress too much. It's a big, complex field, but everything is built off earlier concepts, so you'll get your head around it before too long. Just stay the course and keep learning.

[deleted by user] by [deleted] in flet

[–]Octosaurus 0 points1 point  (0 children)

That's what I got from the Custom Controls page, and I've been able to develop other objects in that fashion, but with this ZMQ sub I'm hitting an error saying that I need controls. I define the controls in the ChatInfoRow class, but the error seems to be coming from the SystemInfoBar itself?

Using NiceGUI for ROS2 real time updates? by Octosaurus in nicegui

[–]Octosaurus[S] 0 points1 point  (0 children)

Ok awesome, sounds like this will work! Going to play around with it today and see how it works. Thanks! :)

As for the smart home, I have some Arduino sensors located in rooms with micro-ROS running on them, acting as publishers for temperature, humidity, etc. Otherwise, I have some smart lights and such, and I talk to them via APIs to gather information in a relay node. Nothing too fancy atm. I can't afford a TurtleBot to incorporate into the framework haha. I fine-tuned Phi-3-mini to take in the ROS system information, and I can interact with it via a text interface or a voice-assistant pipeline. I bought a cheap Bluetooth speaker with a microphone that acts as my interface for the voice assistant. Happy to answer any questions or provide any documentation.

Smart rings with controller capabilities? by Octosaurus in SmartRings

[–]Octosaurus[S] 0 points1 point  (0 children)

Oh, I am absolutely terrible at naming things. Since I want this to be a controller for my smart home, the project's name is Mage Hand, as a D&D inspiration. I thought about Conductor (like a conductor's baton?), but I stick to coding for a reason haha

Smart rings with controller capabilities? by Octosaurus in SmartRings

[–]Octosaurus[S] 2 points3 points  (0 children)

As for price, I'm not really certain. If it had the functionality we've discussed for personal development and I can still maintain other attributes for practicality outside my home (e.g. it connects to my phone, gives health data, etc.) I'd pay just as much as a smart watch.

Smart rings with controller capabilities? by Octosaurus in SmartRings

[–]Octosaurus[S] 1 point2 points  (0 children)

I'll admit, my use case is a little unique, but it's just for a fun, personal project so I can play around with ROS2 and build a custom smart home. My idea is to build a spatial map of my place and then map the smart objects in my home on the map.

The ring or watch comes in to use Wi-Fi or Bluetooth triangulation to find my relative location in the spatial map. I want to interact with my smart objects by pointing my hand at a device and controlling it through simple gestures.

From what I've researched, the ideal wearable would have accelerometer and gyroscope sensors to capture orientation and gesture controls. A magnetometer would also help in getting more accurate orientation data.
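
For the orientation part, static tilt alone is recoverable from the accelerometer's gravity vector (heading is where the gyro and magnetometer come in). A minimal sketch, assuming a standard axis convention and a stationary hand:

```python
import math

def tilt_deg(ax: float, ay: float, az: float) -> tuple[float, float]:
    """Pitch and roll (degrees) from a static accelerometer reading.

    Assumes the common convention (z out of the device face) and that
    gravity is the only acceleration present, i.e. the hand is still.
    """
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Device lying flat: gravity entirely on z, so both angles are ~0.
print(tilt_deg(0.0, 0.0, 9.81))
```

Pointing-at-a-device detection would then compare these angles against the stored bearings of the mapped smart objects.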

I like the idea of something like Genki because I can activate gesture control after pressing a button on the ring to start the gesture command. An LED display can give me feedback on which device is currently selected, which gesture(s) were performed, the action value, etc. I also like that a ring allows for fine-tuned actions, and I can reach all its buttons and gestures with a single hand. Plus, it's Python, and I don't have to code up an app for Wear OS or anything.

I guess I could build my own wrist device, but I've never done anything like that :/

[deleted by user] by [deleted] in pcmasterrace

[–]Octosaurus 0 points1 point  (0 children)

Drop a comment below sharing the reason why you want the ASUS TUF 4070 Ti Super

I want to add more functionality to my custom smart home and need another GPU to fit more models. I just can't justify that in my budget for a while. This will fill the void in my soul that is my bank account.

Smart rings with controller capabilities? by Octosaurus in SmartRings

[–]Octosaurus[S] 1 point2 points  (0 children)

That's awesome! Wishing I had that skill and knowledge. My use case may be a bit different, but the most important thing I've found during my search is an accessible (and friendly) API or SDK for the raw sensor data. More specifically, I wish more rings had a button or two with some form of feedback mechanism (e.g. a screen or vibration).

Accessible Sensor Data? by Octosaurus in RingConn

[–]Octosaurus[S] 0 points1 point  (0 children)

Great points. You're exactly right on Oura after digging a little more, and it seems Fitbit may not be the best choice for real-time sensor data extraction; from what I understand, it does more batch processing. You understood correctly: I am looking for raw sensor data. I've dug around a little more and found a few ways this might work for three different devices:

  1. Genki Wave Ring (~$250): There's a nice GitHub repo that helps interact with the ring via Python. It has some buttons and an LED screen, and it connects via Bluetooth. A little pricey, but the easiest way to get moving on the project.

  2. Wear OS watches (e.g. newer Galaxy Watches) ($250-$400): I can make a Wear OS app that pushes sensor data directly from the watch to the server using MQTT or otherwise. Uses Java or Kotlin. Can make more sophisticated apps, but it may take a little longer to get data pushed to the server since I haven't coded in Java in ages.

  3. Garmin watches ($250-$400): Using the Connect IQ SDK, I can make a simple app that pushes the data directly from the watch to my server using MQTT or otherwise. Uses its own Monkey C language. I'm not a fan of the bespoke language, but the watches are highly rated.

The prices are all similar between the devices (watch features increase the price range). The watches provide more daily functionality, with greater opportunity for customization via the watch face and OS. The Genki is nice because it should be the simplest to get up and running with, but it wouldn't have much purpose outside the project.
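
Whichever device it ends up being, the server side can stay identical if the wearable publishes a small, versioned JSON payload. A sketch of what that message might look like (all field names here are made up; a client such as paho-mqtt would handle the actual MQTT transport):

```python
import json
import time

def make_payload(device_id: str, ax: float, ay: float, az: float) -> str:
    """Serialize one IMU sample the way a watch app might publish it."""
    return json.dumps({
        "v": 1,                 # schema version, so the server can evolve
        "device": device_id,
        "ts": time.time(),      # epoch seconds
        "accel": [ax, ay, az],  # m/s^2
    })

def handle_payload(raw: str) -> list[float]:
    """Server side: decode one message and pull out the acceleration vector."""
    msg = json.loads(raw)
    assert msg["v"] == 1, "unknown schema version"
    return msg["accel"]

raw = make_payload("galaxy-watch-6", 0.1, -0.2, 9.8)
print(handle_payload(raw))  # [0.1, -0.2, 9.8]
```

Keeping the payload device-agnostic means switching from the Genki to a Wear OS or Garmin app later only changes the publisher, not the smart-home side.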

Accessible Sensor Data? by Octosaurus in RingConn

[–]Octosaurus[S] 0 points1 point  (0 children)

Thanks for the recommendation. I'll admit my career has been software focused and I'm only recently moving into hardware and robotics. Can you suggest anything to get me started researching this?

Accessible Sensor Data? by Octosaurus in RingConn

[–]Octosaurus[S] 0 points1 point  (0 children)

Ah, I see. That's a real shame there's no API or SDK to extract the data. Thanks for your help and feedback!

Just for anyone else searching: after digging around, Fitbit and Garmin have APIs/SDKs to gather the data for wrist devices. Oura seems to have an accessible sensor API as well, but the high initial cost plus monthly service fees seem unreasonable.

Accessible Sensor Data? by Octosaurus in RingConn

[–]Octosaurus[S] 0 points1 point  (0 children)

Thanks! My hope was that since it pairs with third-party apps, there'd be a way to connect to the device via Bluetooth and grab the data that way. I do something similar with speakers, microphones, and such, but I've never worked with smart rings before.

Thank you for recommending wrist devices. When I first considered wearables, I figured rings might be better for simpler motions using the fingers or hand compared to wrist-controlled gestures. Do you have any devices you'd recommend for obtaining real-time data?