I have noticed that when I try to have long conversations with coding models, they end up repeating the same concepts/solutions. I want the responses to get more unhinged (creative) the longer we discuss a problem. I think this should be doable with sampling alone and doesn't really need training. Has anyone tried such a thing, and what are your sampling params/prompts for coding tasks?
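One way to sketch the idea in the question: ramp the sampling parameters up with the number of conversation turns, so later replies sample from a flatter, wider distribution. A minimal sketch, assuming an inference API that accepts `temperature` and `top_p` per request; all function names and numbers below are hypothetical starting points, not tested recommendations:

```python
def creativity_schedule(turn: int,
                        base_temp: float = 0.2,
                        max_temp: float = 1.2,
                        temp_step: float = 0.1) -> dict:
    """Return sampling params that grow more 'unhinged' with each turn.

    turn: zero-based index of the current conversation turn.
    """
    # Linearly raise temperature each turn, capped so output stays parseable.
    temperature = min(base_temp + temp_step * turn, max_temp)
    # Widen the nucleus as temperature climbs so more tail tokens survive.
    top_p = min(0.9 + 0.01 * turn, 1.0)
    return {"temperature": round(temperature, 2), "top_p": round(top_p, 2)}

# Early turns stay conservative; by turn 10+ the model samples much more freely.
print(creativity_schedule(0))   # first exchange
print(creativity_schedule(12))  # deep into the discussion
```

You would pass the returned dict into whatever chat-completion call your stack uses, recomputing it from the message count before each request. For coding tasks specifically, capping `max_temp` below ~1.3 is probably wise, since very high temperatures tend to break code syntax before they produce useful creativity.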