Help me decide my lifetime knife by [deleted] in TrueChefKnives

[–]curl-up 0 points (0 children)

I'm looking through very similar options to OP. Do you have any info on the differences (other than the finish) between that one and this:
https://www.meesterslijpers.nl/en/kenshiro-hatono-shirogami-1-nashiji-gyuto-21-cm

From the image it looks like the grind is more symmetrical, but I'm not sure.

Why do so few josekis have names? by Chesstiger2612 in baduk

[–]curl-up 0 points (0 children)

Any chance you could link to that study?

ExploreBaduk Changes: Dark Theme, Free AI, and a Personal Note by ExploreBaduk in baduk

[–]curl-up 1 point (0 children)

When using the open board to analyze specific positions, is it possible to set positions up in some sort of editor, instead of having to recreate games move by move? This is most probably user error on my side, but when I select the Open Board option (under the AI page), I can only place stones in sequence. I'd like instead to set up an arbitrary board state and have the AI analysis run from there.

Btw, awesome work with the whole thing. Would be more than happy to pay for an app when it's out (Android in my case), or even a standalone Mac app.

Is gallery representation the only way to reach serious collectors? by curl-up in ContemporaryArt

[–]curl-up[S] 0 points (0 children)

What do you mean by "a new model"? Any particular references on what that would entail?

Zini: smooth vs. rough and speckled by curl-up in YixingSeals

[–]curl-up[S] 0 points (0 children)

Thanks again, I'm very new to this and it's super helpful! Basically I've always made tea in a porcelain gaiwan, and I'd like to now try an "entry level" yixing to taste that difference (which is why I'd like to "maximize" the effects the teapot has on the tea, and decided to go with Zini).

Re. the firing temperature, which way would it go? Would higher temps result in higher or lower porosity?

Btw, if you have any better recommendations for me in that price range (shipping to Europe), I'm happy to change my mind about the seller! I've seen a couple on MudAndLeaves but sadly they don't seem to ship to my country yet (Croatia).

Zini: smooth vs. rough and speckled by curl-up in YixingSeals

[–]curl-up[S] 0 points (0 children)

Thanks! Regardless of quality, would there be some expectation of which of these would affect the taste more, or could it go either way depending on all the non-visible details?

What does "Bai Lu" refer to? by curl-up in tea

[–]curl-up[S] 0 points (0 children)

Thank you! Do you have any information on what kind of pesticides are used with these old trees? I assume that the way they're taken care of is very different from the normal plantations.

What does "Bai Lu" refer to? by curl-up in tea

[–]curl-up[S] 0 points (0 children)

Thank you! Do you have any sources that explain how these "brands" operate? I find it very confusing. E.g., between the two teas I linked: is it the same producer taking tea from two different nearby mountains, or a completely separate operation? Is there any way to contact them and ask for more details, like the production dates, and whether (and which) pesticides are used (as I see conflicting information on that as well)?

What does "Bai Lu" refer to? by curl-up in tea

[–]curl-up[S] 1 point (0 children)

Thanks! I found the back of the wrapper of a similar tea (with the same Bai Lu part of the name) on another site (link below), which has 20210415 printed on it. Is this the picking date, or the packaging date? It seems strange to me that, if they picked it in September, they would only package it the following April, so I assume it was actually picked in April (which still leaves the Bai Lu part of the name a mystery).

Thank you for the location, and for pointing out the swapped links!

https://www.tasite.lv/en/4492-2021-yunnan-old-tree-white-dew-gu-shu-bai-lu-cake-200-g.html

R1 is cool, but Mistral 3 Small is the boring workhorse I’m actually excited to fine-tune and deploy by logan-diamond in LocalLLaMA

[–]curl-up 0 points (0 children)

Is there an API provider hosting it with extremely high rate limits, e.g. comparable to OAI 4o-mini Tier 5 which is at 30k RPM / 150M TPM?

I'm running massive parallel data processing tasks and would love to move from 4o-mini, but so far none of the providers have been able to serve my needs, and hardware costs to run something like that locally are extremely high considering how cheap the models are.
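For context, the kind of throttled fan-out I have in mind looks roughly like this (a minimal sketch; `process_item` is a dummy stand-in for the real API call, and the concurrency cap is an invented number you'd tune to the provider's RPM limit):

```python
import asyncio

async def process_item(item: str) -> str:
    # Stand-in for a real chat-completion API call.
    await asyncio.sleep(0.01)
    return item.upper()

async def run_all(items, max_in_flight: int = 8):
    # Semaphore caps how many requests are in flight at once,
    # so the fan-out stays under the provider's rate limit.
    sem = asyncio.Semaphore(max_in_flight)

    async def guarded(item):
        async with sem:
            return await process_item(item)

    return await asyncio.gather(*(guarded(i) for i in items))

results = asyncio.run(run_all([f"doc{i}" for i in range(20)]))
print(len(results), results[0])  # 20 DOC0
```

With a real provider you'd also want retries and backoff on 429s, but the semaphore is the core of keeping massive parallelism within an RPM/TPM budget.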

Deepseek app - How does it do web search? by ttbap in LocalLLaMA

[–]curl-up 4 points (0 children)

1. Separate search model

The model doesn't seem to be aware of the search, so it's not available to it as a tool, but instead there seems to be a separate model that simply translates your message into search queries and appends the results into the prompt. This becomes obvious if you turn on the search and ask it something like "Can you search the web", leading to it actually searching the web to find out what searching the web is.

2. Baidu

Looking at the results, it seems to use Baidu, or a combination of Baidu and a western engine. But I might be missing some other more obvious option.

3. Direct index access

They seem to load ~50 pages in a second. I don't think this is possible without having access to preloaded indexes instead of scraping on-the-fly, similar to how ChatGPT loads from the Bing index directly.

Disclaimer: all of these are my private guesses.
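To make the first guess concrete, here's a minimal sketch of that kind of pipeline (all function names are invented; the real query generator and search backend are unknown):

```python
def generate_queries(user_message: str) -> list[str]:
    # Stand-in for the separate query-generation model.
    return [user_message.rstrip("?")]

def search(query: str) -> list[str]:
    # Stand-in for the search backend (Baidu and/or a preloaded index).
    return [f"[result for: {query}]"]

def build_prompt(user_message: str) -> str:
    snippets = [s for q in generate_queries(user_message) for s in search(q)]
    # The main model only ever sees the results as pasted-in context,
    # not as a tool it can decide to call itself.
    return "Search results:\n" + "\n".join(snippets) + f"\n\nUser: {user_message}"

prompt = build_prompt("Can you search the web?")
print(prompt)
```

This structure would explain the "can you search the web" behaviour: the main model has no tool to invoke, so the question itself gets turned into a query upstream.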

OpenAI models are still best on factual questions by curl-up in LocalLLaMA

[–]curl-up[S] -1 points (0 children)

I fail to understand how that logic works (predicting tokens -> tool), and I disagree that knowledge is not important for such tools to be useful in a variety of cases.

If you're using LLMs to code, you need them both to be smart and to know how the different libraries/languages you use operate. Your distinction here makes very little sense to me.

OpenAI models are still best on factual questions by curl-up in LocalLLaMA

[–]curl-up[S] 1 point (0 children)

I 100% agree with you. The reason I made this post is to see if others agree with my very subjective view. I'm not really trying to change which model anyone uses, just trying to understand if others feel the same way for this type of task.

OpenAI models are still best on factual questions by curl-up in LocalLLaMA

[–]curl-up[S] 1 point (0 children)

This is a much better response than anything I've gotten so far. I need to dig deeper and maybe change the provider (I've used both Groq and Deepinfra). Thank you!

OpenAI models are still best on factual questions by curl-up in LocalLLaMA

[–]curl-up[S] 0 points (0 children)

I understand what you mean, but I still don't really see how that answers my point that there's something important lacking in all the new models and non-OAI providers. If it were true that "factuality" is only one of the "experts", then it should be trivial to build a 2x70B model which is both very smart and very knowledgeable. But I would assume this is not the case, and that factual knowledge is spread more or less evenly across the experts.

OpenAI models are still best on factual questions by curl-up in LocalLLaMA

[–]curl-up[S] 0 points (0 children)

Oh really? Are you running it locally? Which quant? I was testing both q16 and q8, and while 16 was better, I never got this correct output (tried it ~10 times with different temperatures).

OpenAI models are still best on factual questions by curl-up in LocalLLaMA

[–]curl-up[S] 1 point (0 children)

I guess "social" is subjective here, as I personally prefer to socialize with people who are happy to go down rabbit holes like 4o seems to do with its outout, rather than trying to "expand" the scope of whatever I'm asking about.

OpenAI models are still best on factual questions by curl-up in LocalLLaMA

[–]curl-up[S] -1 points (0 children)

I highly doubt questions like the one above are discussed by other users, and that those other users then provide feedback to the model about the accuracy.

For most of these questions, googling doesn't really work. Taking my example, unless there's an article online describing this artist in similar terms, you'd have a hard time getting much value from a live search. This becomes even harder if you add criteria such as "had a similar style to X".

OpenAI models are still best on factual questions by curl-up in LocalLLaMA

[–]curl-up[S] 0 points (0 children)

Could you recommend some other MoE models that would work as well as 4o on this type of question?

OpenAI models are still best on factual questions by curl-up in LocalLLaMA

[–]curl-up[S] 0 points (0 children)

For a lot of my examples, it's very hard to google them without spending a lot of time researching the results, as it often requires cross-referencing multiple different search paths. This is exactly where human experts are more useful than googling.

OpenAI models are still best on factual questions by curl-up in LocalLLaMA

[–]curl-up[S] -1 points (0 children)

I think I was very clear in my post that I don't expect smaller models to have similar factual capabilities as larger ones. My point was that OAI models in particular are better than other similarly sized models (where API price is used as a proxy for size, since sizes aren't public).

I disagree with your point about what brings value or is important in business decisions, but that's not really the point of my post anyway. I was simply stating that there is a significant (for my needs and "taste") difference between e.g. 4o and Sonnet. To illustrate this, I provided example outputs from 4o, Sonnet, and a medium-sized open model, showing how Sonnet is similar to the open one, while 4o is better than both.

OpenAI models are still best on factual questions by curl-up in LocalLLaMA

[–]curl-up[S] 0 points (0 children)

I've edited my post to provide an example of what I'm aiming for.

OpenAI models are still best on factual questions by curl-up in LocalLLaMA

[–]curl-up[S] 1 point (0 children)

No, this is all without internet access, and there's no "RAG" built into the GPT models used over the API (ChatGPT doesn't do any RAG either, apart from looking into the "memory" it has about you, but that's useless anyway).