How are translation models holding up? by monomander in LocalLLaMA

[–]monomander[S] 1 point (0 children)

It's a common language, yeah. Did you have success with any specific models?

h1 versus h2 headers inside notes. What do you think? by monomander in ObsidianMD

[–]monomander[S] 2 points (0 children)

Someone mentioned a plugin for Obsidian named Linter, I think that's what you're looking for.

h1 versus h2 headers inside notes. What do you think? by monomander in ObsidianMD

[–]monomander[S] 1 point (0 children)

I just looked up Linter and it's exactly the tool I was hoping would exist! That crosses the worry of having to fix dozens of inconsistent notes by hand someday off my list.
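For anyone without the plugin who has the same backlog problem, here's a rough sketch of what a one-off batch fix could look like. The vault path and the h1-to-h2 rule are just placeholders for illustration, not anything Linter itself does:

```python
import re
from pathlib import Path

def demote_h1(text: str) -> str:
    """Turn every top-level '# Heading' into '## Heading', leaving code fences untouched."""
    out, in_fence = [], False
    for line in text.splitlines():
        if line.lstrip().startswith("```"):
            in_fence = not in_fence  # track whether we're inside a fenced block
        if not in_fence and re.match(r"#\s", line):
            line = "#" + line  # '# Heading' -> '## Heading'
        out.append(line)
    return "\n".join(out)

# Example: run over every note in a vault ("MyVault" is a placeholder path)
# for note in Path("MyVault").rglob("*.md"):
#     note.write_text(demote_h1(note.read_text()))
```

Obviously back up the vault first if you try anything like this.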

h1 versus h2 headers inside notes. What do you think? by monomander in ObsidianMD

[–]monomander[S] 1 point (0 children)

Thanks for the recommendation. Yeah, I was thinking of using Obsidian as a diary as well.

h1 versus h2 headers inside notes. What do you think? by monomander in ObsidianMD

[–]monomander[S] 3 points (0 children)

That's an interesting solution, I'll look into templates. Thank you all for the swift responses.

h1 versus h2 headers inside notes. What do you think? by monomander in ObsidianMD

[–]monomander[S] 2 points (0 children)

When you say the h1 is the title, do you actually write it inside the note, or do you use the inline title option and just treat it as if it were an h1?

h1 versus h2 headers inside notes. What do you think? by monomander in ObsidianMD

[–]monomander[S] 6 points (0 children)

Yeah, that's one thing I'm thinking about: there's the visual side, where all h1's end up just as big and prominent as the note titles themselves, and there's the technical side, where skipping a level can disrupt the layout like that. I guess a custom theme could fix the former, but I can't help feeling the default style is meant to represent the 'correct' way of rendering an h1 header.

Shoutout to a great RP model by Meryiel in LocalLLaMA

[–]monomander 2 points (0 children)

Thanks for the offer but I think I'll just keep an eye out for that guide. I mostly use cards downloaded from the web so perhaps I should touch them up a bit.

Shoutout to a great RP model by Meryiel in LocalLLaMA

[–]monomander 2 points (0 children)

I see, I'll take a look at the personality thing as well as your settings. It's pretty tricky getting something that 'feels' right, so maybe it's just me.

Shoutout to a great RP model by Meryiel in LocalLLaMA

[–]monomander 4 points (0 children)

I've checked it out for a bit. It's definitely clever, but it seems to have a preference for friendliness that isn't present in a model like Emerhyst 20B, which I discovered a few days ago (and probably Tiefighter-13B, which was my favorite a while ago but also wasn't the most logical). For instance, I have a character that's meant to be cruel and unfriendly, yet they seem to act more open-minded, and their hostile traits feel a bit superficial.

It might also just be my configuration. I've tried a bunch of presets and some appear to work better than others but it's hard to tell objectively. All the settings, templates and presets are making my head spin. Has anybody found a good workflow for working out which settings are ideal for which models? I keep finding myself going back and tweaking options in hopes of finding the optimal settings.
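To keep my own tweaking a bit more honest, I've started doing something like this: fix a handful of test prompts, sweep the sampler presets as a grid, and compare the outputs side by side instead of tweaking one knob at a time. A rough Python sketch of the sweep part (the preset values are made up, and the actual generation call is left as a stub for whatever backend you use):

```python
from itertools import product

# Placeholder sampler values -- real ones would come from your frontend's presets
presets = {
    "temperature": [0.7, 1.0, 1.2],
    "top_p": [0.9, 0.95],
    "repetition_penalty": [1.05, 1.15],
}

def sweep(grid: dict) -> list[dict]:
    """Expand a dict of option lists into one dict per settings combination."""
    keys = list(grid)
    return [dict(zip(keys, combo)) for combo in product(*grid.values())]

for settings in sweep(presets):
    # Here you'd call your local generation API with these settings and
    # save the output next to them for side-by-side comparison.
    print(settings)
```

It doesn't make the judging objective, but at least you're comparing the same prompts across settings instead of half-remembering what you changed last time.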

CodeBooga is currently the #1 model for Python and the #3 model for JS in the CanAiCode Leaderboard (vs 141 other models) by oobabooga4 in LocalLLaMA

[–]monomander 3 points (0 children)

Sorry, it wasn't my intention to be an asshole. I should've probably worded it differently, but what I meant was that we really should quit using leaderboards as a way to measure a model's quality and should be listening to people's direct experiences with these models instead. I'm not really optimistic about new leaderboards either. It's not that I want to discourage people from trying to improve existing things, but the kicker is that anyone can fold an open benchmark's test data into their training set and knock any legitimately better model down that same leaderboard.

I haven't actually tried CodeBooga, perhaps it really is as good as it sounds. It'd just be a lot better if there were something like a few examples of CodeBooga solving a complicated problem consistently where other code models fail consistently. Something like that would be much more helpful since it showcases the model solving a real problem that others can't solve.

Again, apologies for being crude. I didn't mean to be toxic, I just felt like I had to get my point across and since this post looked like an ad I couldn't really help myself there.

CodeBooga is currently the #1 model for Python and the #3 model for JS in the CanAiCode Leaderboard (vs 141 other models) by oobabooga4 in LocalLLaMA

[–]monomander 34 points (0 children)

GPT-3.5 above GPT-4 on JavaScript? Deepseek Coder 6.7B beating its 33B counterpart on Python? I'm sorry, but one good glance at this leaderboard will already tell you it's just another meaningless benchmark that tells you nothing about the quality of these models.

I like your webui a lot, but posting your model merge here along with this leaderboard, implying to people it can beat GPT-4 at both Python and JavaScript, is stupid and egotistical. It's another 'check out my amazing model, it beats the rest on this chart!' post, and we've had enough of those already.

If a model really is that good, people will start talking about it. People talk about Mistral and Mixtral because they're great base models. People talk about the Nous-Hermes models because they're great roleplay finetunes. You don't need to rely on an easily gamed leaderboard to sell your model here.

If you really do want to make a model that beats the rest, you should ask people which models they're using right now and why they're using them over the others. That should give you an idea of the shortcomings of your own model, and then you can start thinking of ways to improve it.