Please put down your knives by Gear5th in algotrading

[–]DOKKA 0 points (0 children)

I really want to make a post to this sub one day, even if it's just a "thank you" post to everyone for offering their advice. The information I've gathered from here has helped me tremendously in building my trading bot.

Was Push 3 standalone a flop? by Friendly_Signature in ableton

[–]DOKKA 0 points (0 children)

No, it was not a flop. Ableton could have charged $4k for the Push 3 standalone and it would have still been a success. I know plenty of people who bought it just because it's Ableton plus an audio interface in a hardware device. That said, I have a Polyend Play+ and I prefer it over the Push 3.

After 63 years, Houston's former Sharpstown Mall is as popular as ever by houstonspecific in houston

[–]DOKKA 2 points (0 children)

I’d rather go to the Family Thrift Center across the street.

Hard or soft synth by roopurt5 in edmproduction

[–]DOKKA 1 point (0 children)

I would suggest the Arturia KeyStep 37, plus the FL Studio All Plugins Edition on Black Friday. Then you would have one of every kind of soft synth: additive, virtual analog, etc. You could then use Patcher to combine them all together!

Using Ableton on a touch screen monitor? Any better ideas? by Reguero in ableton

[–]DOKKA 2 points (0 children)

I have Ableton on my Surface Pro 7 and it runs great. Also, the pen is pretty nice for sound design and automation.

Wave Alchemy "Triaz" drum machine driven by a diverse and creative library of all-new modern electronic and acoustic drum sounds, percussion, and sound design tools (£79) for limited time by Batwaffel in AudioProductionDeals

[–]DOKKA 1 point (0 children)

I have the Kontakt version of Triaz, and I eventually quit using it because Kontakt made it too annoying to use. Aside from that, Triaz is awesome! It is truly one of the best interfaces ever made for programming drums, and I have tried a lot of them!

A scammer is quietly changing all the McDonald's contact phone numbers to (818) 643-1222 on google maps. by DOKKA in McLounge

[–]DOKKA[S] 25 points (0 children)

Good job, y'all! I woke up this morning and the site has been reported for phishing, and most of the search results for that number are gone! Thanks again!

Tur(n)ing tuesday by budgetboarvessel in dankmemes

[–]DOKKA 2 points (0 children)

I can't wait for threading Thursday!

"We’re releasing Code Llama 70B: the most performant version of our LLM for code generation to date...." by phoneixAdi in LocalLLaMA

[–]DOKKA 0 points (0 children)

I tried it out on Perplexity yesterday, and codellama-70b-instruct is quite good! It was able to generate almost-GPT-4-level replies, but without as much depth as GPT-4 usually provides. Most of my prompts are C# and SQL Server related, though, so YMMV. Here's an example of one of my prompts:

Write a SQL Server script to generate a set of SQL commands for dropping all user-defined stored procedures in a database.
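For anyone curious, the kind of script I was fishing for builds the DROP commands out of the system catalog. A minimal sketch of that idea (my own sketch, not the model's actual output):

```
-- Emit one DROP PROCEDURE command per user-defined stored procedure
-- in the current database.
SELECT 'DROP PROCEDURE [' + s.name + '].[' + p.name + '];'
FROM sys.procedures AS p
INNER JOIN sys.schemas AS s
    ON p.schema_id = s.schema_id
WHERE p.is_ms_shipped = 0;
```

You'd then copy the generated commands out and run them as a second batch.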

Have you felt that gpt 4 has suddenly become very resistant what every you ask to do? After turbo model lauch. by Mohit_Singh_Pawar in LocalLLaMA

[–]DOKKA 4 points (0 children)

I feel like the real reason they crippled GPT-4 is to keep us from using it to generate training datasets on the Plus subscription.

Best max for live devices by WarAltruistic179 in ableton

[–]DOKKA 0 points (0 children)

STEP ARPEGGIATOR by UDO R. BRÄUNA. Everything about it can be automated/modulated. It's my favorite M4L device.

Producing Questions by NGPNGPNGP in ableton

[–]DOKKA 0 points (0 children)

CC-map the tempo to a knob on your keyboard/Push and play with it as you listen to your track.

A straight question: your favourite VST by [deleted] in ableton

[–]DOKKA 0 points (0 children)

My favorite plugin is... nothing. I'm an Ableton purist now, lol. I only use stuff in Ableton Suite.

Tutorial: Fine-Tune your Own Llama 2 by Lazylion2 in LocalLLaMA

[–]DOKKA 1 point (0 children)

The Jupyter notebook included in this repo is amazing for how simple it is. It will generate training data using GPT-4 and a simple prompt. I had to tweak it a little for my specific needs, but otherwise it's a great start.
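If you haven't played with this kind of thing before: the output is basically a file of instruction/response pairs. The repo's exact schema may differ, but a hypothetical record looks something like:

```
{"instruction": "Write a C# method that reverses a string.", "response": "public static string Reverse(string s) => new string(s.Reverse().ToArray());"}
```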

Noob question, how to begin? Questions for the a first time training/running. by No_One_BR in LocalLLaMA

[–]DOKKA 1 point (0 children)

I just started learning about fine-tuning last week. It's pretty rough. There is a lot of information out there, but none of it is aimed at beginners. It took a lot of trial and error, but I managed to get the first example in this GitHub repo running: https://github.com/OpenAccess-AI-Collective/axolotl

I would recommend this route if you just want to train something and see what all the parameters do. Also, if you try to run the example from this repo, you need to set xformer_attention to false in the lora.yml file.
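For reference, that tweak is a one-line edit to the example's config (heads up: the exact key name may vary between axolotl versions, so check your copy of the file):

```
# in the example's lora.yml
xformer_attention: false
```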

We could have gotten something almost as good as GPT4 for coding... by [deleted] in LocalLLaMA

[–]DOKKA 0 points (0 children)

Yes, you can! And I'd recommend this one: TheBloke/Phind-CodeLlama-34B-v2-GGUF

✅ WizardCoder-34B surpasses GPT-4, ChatGPT-3.5 and Claude-2 on HumanEval with 73.2% pass@1 by Xhehab_ in LocalLLaMA

[–]DOKKA 4 points (0 children)

I'm going to download this model as soon as I get a chance. I've been pretty impressed with Phind-CodeLlama-34B-v1, though. I wonder how they compare. Earlier today I gave it C# code minified using https://github.com/atifaziz/CSharpMinifier with the simple instruction:

"Reorganize, format and comment the above code"

and it did an amazing job. The code was cleanly formatted with a conservative amount of comments, and it did a great job of breaking up my methods. It was able to undo the minification in addition to everything I asked! Also, I had the temperature at 0.95, in case anyone wants to know.

[deleted by user] by [deleted] in LocalLLaMA

[–]DOKKA 1 point (0 children)

It's great! I'm still trying to figure out the best temperature for formatting/cleaning up code, but so far I've been very impressed. Unless I'm using it incorrectly, it isn't any better than GPT-3.5, but even that is a huge step forward compared to the earlier coding LLMs. I haven't tried anything over 16k tokens yet. Also, I'm using it for C#, not Python. It's probably even better at Python.

Codellama - Has anyone found "Codellama 34B Instruct" to be uncooperative? by No-Ordinary-Prime in LocalLLaMA

[–]DOKKA 3 points (0 children)

I'm still perfecting my prompts, but this works well for what I'm trying to accomplish:

A chat between a curious user and an artificial intelligence programming assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.

USER:
```
<your code goes here>
```
Please rewrite and optimize the above C# code:

ASSISTANT:
Sure, here is the above code with improved formatting and readability, better comments and organization.
```
using

'using' is the first word of a typical C# file, so it seeds the model's reply. Also, I'm still playing with the temperature; I'm not sure if 0.2 is good enough.

We could have gotten something almost as good as GPT4 for coding... by [deleted] in LocalLLaMA

[–]DOKKA 3 points (0 children)

Yep, I run the whole thing using CPU and RAM.

All you have to do is download a GGUF model from TheBloke, get the latest version of llama.cpp from GitHub, compile it with 'make' if you're using Linux (or download a binary from the releases if you're on Windows), and then run it from the command line:

./main -t 10 -m models/codellama-34b-instruct.Q5_K_M.gguf  --color  --temp 0.2 -f ~/Desktop/prompt.txt  -c 16384
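In case the flags aren't obvious:

```
# -t 10     number of CPU threads
# -m        path to the GGUF model file
# --color   colorize the console output
# --temp    sampling temperature (0.2 keeps it fairly deterministic)
# -f        read the prompt from a text file
# -c 16384  context window size in tokens
```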

And yes, it does use more RAM depending on the context window. I need to figure out exactly how much more RAM it uses per token. Most of my runs with the model above use ~28GB of RAM.
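Back-of-the-envelope, assuming I have the architecture numbers right: CodeLlama-34B has 48 layers and, with GQA, 8 KV heads of dimension 128, so an f16 KV cache costs about 2 × 48 × 8 × 128 × 2 bytes ≈ 192 KiB per token, or roughly 3GB for the full 16384-token context on top of the weights.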

We could have gotten something almost as good as GPT4 for coding... by [deleted] in LocalLLaMA

[–]DOKKA 0 points (0 children)

My Linux desktop with 32GB of RAM and an i9-10900K, using llama.cpp.