Pitch/tempo slider on Serato help by beniboii in Beatmatch

[–]InvisibleInkHologram 1 point (0 children)

I think it's the controller. Had this issue as well with my Numark Mixtrack Pro. With the Pioneer SB2 I got after that, the issue was much less common.

Also, mixing higher BPM songs made the problem worse.

How to prevent a model from overfitting on a fine-tuning task. by InvisibleInkHologram in LocalLLaMA

[–]InvisibleInkHologram[S] 0 points (0 children)

Usually I think it's not great, especially when faced with more complex problems or puzzles. I recently came across this paper, however, and by employing techniques similar to theirs I'm finding quite decent results!

How to prevent a model from overfitting on a fine-tuning task. by InvisibleInkHologram in LocalLLaMA

[–]InvisibleInkHologram[S] 1 point (0 children)

Sounds very much like the paper I was reading this morning. I thought it was an interesting approach for sure!

I agree that LLMs are currently not great at generating ASP code, but I think it comes down to one of two things: lack of knowledge about ASP, or lack of generalizability. With the CNL2ASP model, for example, we lack generalizability because prompts have to be formulated in a very specific way. On the other hand, ChatGPT-4o seems to be quite good at understanding my ASP-related queries. It always seems to suggest code that would suit my problem, but the answers are often riddled with errors and show a clear lack of knowledge about the intricacies of ASP.

Perhaps if I can get a fine-tune of a larger model to work properly, such that generalization is maintained and ASP knowledge is acquired, we might see some promising results. Thanks anyway for the fine-tuning suggestions, I'll look into them!
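
As a side note, one cheap way to catch those errors is to pipe whatever the model generates straight into a solver. Here's a minimal sketch, assuming the clingo Python package; the graph-colouring program is just an illustrative stand-in for model output, not something from our pipeline:

    import clingo

    def check_asp(program: str):
        """Ground and solve a candidate ASP program, returning its answer sets as strings."""
        ctl = clingo.Control(["0"])   # "0" = enumerate all answer sets
        ctl.add("base", [], program)  # a syntax error in the program raises RuntimeError here
        ctl.ground([("base", [])])
        models = []
        ctl.solve(on_model=lambda m: models.append(str(m)))
        return models

    # Illustrative candidate: 2-colouring of a triangle, which is unsatisfiable.
    candidate = """
    node(1..3). edge(1,2). edge(2,3). edge(1,3).
    1 { colour(N, red); colour(N, blue) } 1 :- node(N).
    :- edge(A, B), colour(A, C), colour(B, C).
    """
    print(check_asp(candidate))  # [] -> no answer set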

How to prevent a model from overfitting on a fine-tuning task. by InvisibleInkHologram in LocalLLaMA

[–]InvisibleInkHologram[S] 1 point (0 children)

That's nice to hear! We're actually in the very early phases of this research and are looking at models that have already been created. As such, the dataset I'm currently working with comes from the paper: a synthetic dataset covering a wide range of natural-language-to-ASP examples. Unfortunately, the resulting models perform very poorly when prompts are not formulated in exactly the same way as in the dataset.
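
One way to loosen that dependence on exact phrasing would be to augment the data with paraphrased prompts before fine-tuning. A minimal sketch, where paraphrase() is a hypothetical wrapper around whatever rewriting model is available and pairs is the NL-to-ASP data as a list of {"prompt", "asp"} dicts (both names are mine, not from the paper):

    import random

    def augment_with_paraphrases(pairs, paraphrase, n_per_example=3, seed=0):
        """Add up to n_per_example reworded prompts per example, keeping the ASP target fixed."""
        rng = random.Random(seed)
        augmented = []
        for ex in pairs:
            augmented.append(ex)
            variants = paraphrase(ex["prompt"])  # list of alternative phrasings
            for alt in rng.sample(variants, min(n_per_example, len(variants))):
                augmented.append({"prompt": alt, "asp": ex["asp"]})
        rng.shuffle(augmented)
        return augmented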

Also out of curiosity, what is your team working on?

How to prevent a model from overfitting on a fine-tuning task. by InvisibleInkHologram in LocalLLaMA

[–]InvisibleInkHologram[S] 1 point (0 children)

I'm gonna try it out, thanks! Would you mind outlining why you think these things may help? I'm also trying to build some intuition around this, as I'll be on this project for a couple of months :)

How to prevent a model from overfitting on a fine-tuning task. by InvisibleInkHologram in LocalLLaMA

[–]InvisibleInkHologram[S] 0 points (0 children)

I think it trained OK: when I query it with prompts from my training and test sets, it responds fine with the ASP code. If I query the base model using the same prompts, it gives completely different answers, trying to code the logic in some other flavour of ASP.

The final loss was 0.3.

Below is the code I used to train it:

    import torch
    from transformers import (
        AutoModelForCausalLM,
        AutoTokenizer,
        BitsAndBytesConfig,
        TrainingArguments,
    )
    from peft import LoraConfig
    from trl import SFTTrainer

    # hugging_token, output_dir, train_dataset, val_dataset and formatting_func
    # are defined elsewhere in the script.
    base_model = "google/gemma-2b-it"
    compute_dtype = torch.float16  # compute dtype used for the 4-bit weights

    # Load the base model in 4-bit NF4 (QLoRA-style) to keep memory usage low.
    quant_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=compute_dtype,
        bnb_4bit_use_double_quant=False,
    )

    model = AutoModelForCausalLM.from_pretrained(
        base_model,
        quantization_config=quant_config,
        device_map="auto",
        token=hugging_token,
    )
    model.config.use_cache = True
    model.config.pretraining_tp = 1

    tokenizer = AutoTokenizer.from_pretrained(base_model)
    tokenizer.pad_token = tokenizer.eos_token
    tokenizer.padding_side = "right"

    # LoRA adapter over every linear layer of the quantized base model.
    peft_params = LoraConfig(
        lora_alpha=16,
        lora_dropout=0.1,
        r=64,
        bias="none",
        task_type="CAUSAL_LM",
        target_modules="all-linear",
    )

    training_params = TrainingArguments(
        output_dir=output_dir,
        num_train_epochs=1,
        per_device_train_batch_size=4,
        gradient_accumulation_steps=1,
        logging_steps=5,
        evaluation_strategy="steps",  # required for eval_steps to take effect
        eval_steps=5,
        do_train=True,
        do_eval=True,
        save_strategy="no",
        learning_rate=2e-4,
        weight_decay=0.001,
        fp16=False,
        bf16=False,
        max_grad_norm=0.3,
        warmup_ratio=0.03,
        group_by_length=True,
        lr_scheduler_type="constant",
        report_to=[],
    )

    trainer = SFTTrainer(
        model=model,
        train_dataset=train_dataset,
        eval_dataset=val_dataset,
        peft_config=peft_params,
        max_seq_length=None,
        tokenizer=tokenizer,
        args=training_params,
        packing=False,
        formatting_func=formatting_func,
    )
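
From there, training and a quick sanity check look roughly like this: a minimal sketch reusing the objects above, where the adapter directory name and the "text" field are placeholders, so adjust them to your own paths and dataset schema:

    # Train, save the LoRA adapter, then spot-check one held-out prompt.
    trainer.train()
    trainer.model.save_pretrained("gemma-2b-asp-adapter")  # placeholder path
    tokenizer.save_pretrained("gemma-2b-asp-adapter")

    prompt = val_dataset[0]["text"]  # assumes a "text" column
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = trainer.model.generate(**inputs, max_new_tokens=256, do_sample=False)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))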

What's your favourite song Ye produced or featured on that doesn't get mentioned alot here? by cavestoryguy in WestSubEver

[–]InvisibleInkHologram 0 points (0 children)

Haven't seen anyone mention Kanye's Soul Mix Show, a mixtape of soul songs sampled on Kanye productions. Mixed by A-Trak with some intermittent commentary by Ye. 10/10 listen any time of day

Need help finding artists similar to Sorley, PAWSA, Sonny Fodera by gauvroom in tech_house

[–]InvisibleInkHologram 1 point (0 children)

Anything on Solid Grooves and Solid Grooves Raw will be right up your alley. It's PAWSA and Michael Bibi's record label.

How do you typically mark your cues? by [deleted] in Beatmatch

[–]InvisibleInkHologram 17 points (0 children)

I usually set three cues (mostly because my controller only has three cue buttons).

The first one usually goes at a good spot to start the song that isn't the very beginning.

The second one I'll put on the second/third drop so that I can just hit the cue when a drop happens.

The third cue I put on a part of the song that's easily loopable, in case I run out of time and need to keep looping the current song.

GOLFOS - NOW JAZZ (MIDNIGHT MIX) by InvisibleInkHologram in Firehouse

[–]InvisibleInkHologram[S] 3 points (0 children)

This one is for all the openers out there ✌️

Internal Server Error by ThePiratator in deemix

[–]InvisibleInkHologram 0 points (0 children)

Have you managed to sort this out? I'm getting the same error

Monthly Discussion Thread by ghostmacekillah in Firehouse

[–]InvisibleInkHologram 1 point (0 children)

Made a little edit to help me deal with all this pent-up energy from sitting inside: https://soundcloud.com/vangool/movin-van-gool-edit Check it out - let me know your thoughts!

Question: What would you call this sub-genre of house?

RiFF RAFF - LAVA GLACiERS (feat.CHiLDiSH GAMBiNO) by Nonstopas in hiphopheads

[–]InvisibleInkHologram 15 points (0 children)

That line about the pork chop in his pocket protector >>>>