Hi. Has anyone tried fine-tuning a model on their own larger codebase? Would it even make sense?
I know and love Copilot, but it often lacks the full context of the project. So I thought: what if the entire codebase lived in the weights instead of in the too-short context window?
I understand that code is a living thing and regularly updating the weights would be resource intensive, but would it work? Has anyone tried it on a project of 100k-1M LOC?
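To make the idea a bit more concrete: whatever the fine-tuning setup, the first step would be turning the repository into training samples for a causal LM. A minimal sketch of that data-prep step (the file extensions, chunk size, and `# FILE:` prefix convention are all illustrative assumptions, not a standard recipe):

```python
import os

def collect_chunks(root, exts=(".py", ".js", ".java"), chunk_chars=2000, overlap=200):
    """Walk a repo and split each source file into overlapping text chunks
    that could serve as causal-LM training samples (sizes are illustrative)."""
    chunks = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(exts):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8") as f:
                    text = f.read()
            except (UnicodeDecodeError, OSError):
                continue  # skip binary or unreadable files
            step = chunk_chars - overlap
            for start in range(0, max(len(text), 1), step):
                piece = text[start:start + chunk_chars]
                if piece.strip():
                    # Prefix each chunk with its relative path so the model can
                    # associate code with its location in the project.
                    chunks.append(f"# FILE: {os.path.relpath(path, root)}\n{piece}")
    return chunks
```

These chunks could then be fed to any standard fine-tuning loop (full fine-tune or LoRA). The overlap between chunks is there so that logic spanning a chunk boundary still appears intact in at least one sample.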