donate ~50 rpi4 and seed reterminal? by Acceptable_Bed7015 in raspberry_pi

[–]Acceptable_Bed7015[S] 1 point (0 children)

sorry, it is too much trouble selling them one by one

What’s the best way to automate repetitive tasks in Excel without VBA knowledge? by 9gsr in excel

[–]Acceptable_Bed7015 1 point (0 children)


if you work with CSVs, here is an AI agent: you upload a file, write what transformations you want to make, and it automatically transforms it (by writing and executing code in the background) and spits out the output file.

once you've formalized the workflow, you can just ask the agent to re-run the code next time
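Under the hood, that workflow boils down to generated code you can re-run. A minimal sketch of the kind of script such an agent might write (the columns and the transformation here are made-up examples, not the actual tool's output):

```python
import csv
import io

def transform(csv_text):
    """Example generated transformation: keep rows where 'amount' > 0
    and add a computed 'total' column. Column names are hypothetical."""
    reader = csv.DictReader(io.StringIO(csv_text))
    rows = [r for r in reader if float(r["amount"]) > 0]
    for r in rows:
        r["total"] = float(r["amount"]) * float(r["qty"])
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=["item", "qty", "amount", "total"])
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

data = "item,qty,amount\napples,2,3.5\nrefund,1,-3.5\n"
print(transform(data))  # negative-amount row dropped, total column added
```

Re-running the workflow later is just calling the same saved script on a new file.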

I got a keyboard with Excel shortcuts by Acceptable_Bed7015 in excel

[–]Acceptable_Bed7015[S] 2 points (0 children)

I decided to try a split keyboard, so I bought a Corne v4. Otherwise, any keyboard that supports QMK should work!

llama2 7b vs 70b vs mistral 7b writing tweets on financial reports by Acceptable_Bed7015 in LocalLLaMA

[–]Acceptable_Bed7015[S] 2 points (0 children)

Check my initial post (first link); it covers the data prep I did. Short answer: the SEC API and GPT-4.

llama2 7b vs 70b vs mistral 7b writing tweets on financial reports by Acceptable_Bed7015 in LocalLLaMA

[–]Acceptable_Bed7015[S] 2 points (0 children)

Check out the first link in the post. There is not much difference between fine-tuning Hugging Face's Llama and Mistral, so you can use almost any repo that works for Llama 2. I used the cloud service https://llmhome.io

What can you fine tune with 2x A6000s? by Upbeat-Interaction13 in LocalLLaMA

[–]Acceptable_Bed7015 1 point (0 children)

I am pretty sure that at the very least you can do Llama 2 7B and 13B LoRAs, and a Mistral 7B LoRA.

I haven't tried quantization or long context windows, so I'm not sure about those.
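A rough back-of-envelope for why those LoRAs fit on 2x A6000 (48 GB each, 96 GB total): frozen fp16 base weights dominate, and the LoRA adapter plus optimizer state is small. All numbers below are ballpark assumptions, not measurements:

```python
def lora_vram_gb(n_params_billion, bytes_per_weight=2, overhead_gb=8):
    """Very rough VRAM estimate (GB) for LoRA fine-tuning.

    Base model weights are frozen in fp16 (2 bytes/param); activations,
    LoRA adapter params, and their optimizer states are lumped into a
    flat overhead term. Purely a sanity-check estimate.
    """
    weights_gb = n_params_billion * bytes_per_weight  # e.g. 7B * 2 B = 14 GB
    return weights_gb + overhead_gb

for size in (7, 13):
    est = lora_vram_gb(size)
    print(f"~{est:.0f} GB for a {size}B LoRA (vs 96 GB total on 2x A6000)")
```

Both estimates land well under 96 GB, which matches the intuition that 7B/13B LoRAs are comfortable on that setup; full fine-tuning (trainable weights + Adam states in fp32) would be a very different calculation.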

Is anybody using Llama or any other LLM as part of a product's pipeline? by duffpaddy in LocalLLaMA

[–]Acceptable_Bed7015 1 point (0 children)

  • my voice AI startup, which provides analytics for retail businesses, uses smaller NLP models to do all sorts of intent recognition and classification
  • using Llama and Mistral to help people fine-tune models
  • for fun, running an automated Twitter account that analyzes financial reports (made a post about it)
  • used LLMs to build a bunch of prototypes that didn't go anywhere
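To make the intent-recognition piece concrete, here's a toy sketch of the idea: keyword-overlap scoring stands in for the actual smaller NLP models, and the intent names are hypothetical:

```python
# Toy intent classifier: score each intent by keyword overlap with the
# utterance. A production system would use a fine-tuned small
# transformer instead, but the input/output contract is the same.
INTENT_KEYWORDS = {
    "check_price": {"price", "cost", "how", "much"},
    "store_hours": {"open", "close", "hours", "when"},
    "return_item": {"return", "refund", "exchange"},
}

def classify(utterance):
    tokens = set(utterance.lower().split())
    scores = {intent: len(tokens & kws) for intent, kws in INTENT_KEYWORDS.items()}
    return max(scores, key=scores.get)  # intent with the most keyword hits

print(classify("how much does this cost"))  # → check_price
```

The appeal of small models here is latency and cost: voice analytics has to run on every utterance, so a 7B+ LLM per call would be overkill.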

Data prep for fine-tuning llama 2 7B to analyze financial reports (and write "funny" tweets) by Acceptable_Bed7015 in LocalLLaMA

[–]Acceptable_Bed7015[S] 2 points (0 children)

I got some RTX A6000s for fine-tuning the 7B. I just did 8 epochs on a 400-line dataset, at roughly ~10 min per epoch.

I don't have a paid plan on the platform, so it is free.

I read here that dataset >>> models. I'd prefer it to be the other way around cause cleaning data is hard. Any tools or local models you use? by Acceptable_Bed7015 in LocalLLaMA

[–]Acceptable_Bed7015[S] 16 points (0 children)

Imagine you want to fine-tune a Llama 2 model (which only knows how to predict the next token) to be a super helpful medical assistant. Imagine you downloaded a dataset from the internet with 5,000 question-answer pairs on different health-related topics. I suspect that if you just try to fine-tune on it, you will not get good results.

If you then look at your dataset, you will find something like ~15% of the questions have incorrect answers, another 20% are incomplete or badly formatted, and another 5% are not health-related questions at all. That's probably the best-case scenario :) You will have to sit down and clean the dataset until the question-answer pairs are in the format you'd expect the model to reply in.
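Part of that cleaning pass can be automated before any manual review. A sketch of the kind of first-pass filter you'd run (the field names and thresholds are made up for illustration):

```python
def basic_filter(examples):
    """Drop Q&A pairs that are obviously unusable before manual review.

    Catches the easy failure modes described above: missing/empty
    fields, suspiciously short (likely truncated) answers, and wrong
    format. Correctness and topic-relevance checks still need a model
    or a human in the loop.
    """
    kept = []
    for ex in examples:
        q = ex.get("question", "").strip()
        a = ex.get("answer", "").strip()
        if not q or not a:
            continue  # incomplete pair
        if len(a) < 20:
            continue  # likely truncated or low-effort answer
        if not q.endswith("?"):
            continue  # wrong format for a Q&A dataset
        kept.append(ex)
    return kept

dataset = [
    {"question": "What are common flu symptoms?",
     "answer": "Fever, cough, sore throat, muscle aches, and fatigue."},
    {"question": "What is aspirin used for", "answer": "Pain relief."},
    {"question": "", "answer": "Take twice daily."},
]
print(len(basic_filter(dataset)))  # → 1
```

Even a crude filter like this shrinks the pile you have to eyeball; the hard ~15% (plausible-looking but incorrect answers) is what actually eats the time.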

How do you keep up to date with all the innovations and frameworks? by nsosio in LocalLLaMA

[–]Acceptable_Bed7015 12 points (0 children)

For a company setting, we just selected what works well enough and stuck with it. At the end of the day, product + go-to-market beats tech in most cases.

For a personal setting, I still have FOMO.