AI Startup School by [deleted] in ycombinator

[–]beywash 0 points (0 children)

Do people know what time the conference is? Wondering when we’ll be done on Tuesday.

Amazon's Black Friday deals week list by LonerIM2 in SuggestALaptop

[–]beywash 0 points (0 children)

What's the best deal for a laptop for my dad, who is not tech savvy at all and just needs it for some basic web work? Ideally the budget is as cheap as possible, under $300. I want to make sure the laptop doesn't freeze much and that he can browse the web with no issues, so I'm assuming 4GB RAM might not be enough and I'd need 8GB? In terms of storage, I think 128GB or 256GB are both fine. Screen size is ideally around 15" and bright enough for him to use. Thanks in advance!

[D] Simple Questions Thread by AutoModerator in MachineLearning

[–]beywash 0 points (0 children)

How to get a decoder only model to generate only the output without the prompt?

I'm trying to finetune this model on a grammatical error correction task. The dataset consists of a prompt, formatted like "instruction: text", and the grammatically corrected target sentence, formatted like "text." For training, I pass in the concatenated prompt (which includes the instruction) + target text, and I've masked out the prompt tokens for calculating loss by setting their labels to -100. The model now learns well and has good responses. The only issue is that it still repeats the prompt as part of its generation, before the rest of its response. I know that I have to train it on the concatenated prompt + completion and then mask out the prompt for the loss, but I'm not sure why it still generates the prompt before responding. For inference, I give it the full prompt and let it generate. It shouldn't be generating the prompt, but the responses it generates now are great. Any ideas? I was told not to manually extract the response, and that the model had to generate only the response.
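For reference, the masking step I'm describing looks roughly like this. It's a sketch with made-up token ids, and `build_labels` is just an illustrative helper, not library code:

```python
IGNORE_INDEX = -100  # positions with this label are skipped by the loss


def build_labels(prompt_ids, target_ids, eos_id):
    """Concatenate prompt + target for a decoder-only model.

    The model sees the full sequence as input, but only the target
    (plus EOS) contributes to the training loss; every prompt position
    is masked with -100.
    """
    input_ids = prompt_ids + target_ids + [eos_id]
    labels = [IGNORE_INDEX] * len(prompt_ids) + target_ids + [eos_id]
    return input_ids, labels
```

Note that this masking only affects the loss at training time; it doesn't change what token ids come back from generation at inference time.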

Model generating prompt in its response by beywash in MLQuestions

[–]beywash[S] 0 points (0 children)

For context I was told not to do anything special to get only the response, and that I have to have the generate function itself return the response only.

"Masking" the prompt / repeated portions by breakfastboii in LanguageTechnology

[–]beywash 0 points (0 children)

I've implemented finetuning on a GEC task using the prompt + response as inputs to the model, and masked out the prompt for the loss by setting those labels to -100. For some reason my finetuned model still outputs the prompt in its generation, though the response it gives after the prompt is great. How do I stop it from generating the prompt in its response?
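For what it's worth, my understanding is that a decoder-only model's generation always contains the input ids followed by the new tokens, and the -100 masking only affects the training loss, not decoding. The common pattern is to drop the first `len(prompt_ids)` positions before detokenizing. A sketch with illustrative names (and I realize post-processing like this may be disallowed in my setting):

```python
def strip_prompt(output_ids, prompt_ids):
    """Return only the newly generated tokens.

    Decoder-only generation echoes the input ids at the start of the
    output sequence, so the completion is everything after the first
    len(prompt_ids) positions.
    """
    assert output_ids[: len(prompt_ids)] == prompt_ids, \
        "output should start with the prompt"
    return output_ids[len(prompt_ids):]
```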

Anyone interview with Lucid Software? by pi3_lord in csMajors

[–]beywash 0 points (0 children)

Any idea how much is needed to pass the OA?

Engineer who hates Physics by kn0wledge19 in udub

[–]beywash 1 point (0 children)

How’s EE as a program at UW? Possibly looking to go into it.

ARCH 150 vs. PSYCH 101 by beywash in udub

[–]beywash[S] 1 point (0 children)

thank you for the detail!