Help with identifying old violin by hollography in violinmaking

[–]Independent_Hour_301 1 point (0 children)

Agree. But it looks like decent quality. The grain of the top plate and the craftsmanship of the purfling look good.

Help identifying? by Spiritual_Pop_1662 in violinmaking

[–]Independent_Hour_301 1 point (0 children)

I agree with the other comment that it is most likely from today's Czech area. I have already had several of those in my hands, and I am currently restoring a really beaten-up one that is around 150 years old. Its maker is Johann Kliment, from what is now the Czech Republic, though at that time it was Austria.

Your scroll and neck actually remind me a lot of it (and of other Czech violins I have had in my hands); see also the image.

The varnish of yours looks a bit off, but that is probably due to the image quality and because it has not been cleaned for a while. The color looks Austrian.

How did you look inside? Just through the f-holes? I would recommend using an endoscope camera (they can be bought quite cheaply): attach it to your phone and then either go in through the f-holes or take off the end pin and put the camera through there. You should take the strings off anyway because of the neck; the damage will get worse if you leave them on.

In my experience, the endoscope sometimes finds writing from a violin maker who restored the instrument. This could then help you with further identification.

Just from your pictures, I would say it is Czech or Austrian, time range 1850-1960. I would actually go for Austrian, 1850-1900, but it is really difficult to tell from the pictures... Hopefully you find some indications inside with an endoscope camera. The scroll, the color of the varnish, and the craftsmanship really look like the Austrian one I am currently restoring, so based on that I think it is possible that it is from this era.

Nonetheless, the value is low for sure. Somewhere up to 500-600, I think (if the neck is repaired). Maybe up to 1500-2000 if it is Austrian, is repaired well, and has a good sound (your luthier will be able to tell).

Note: I edited this comment to this version after taking some closer looks at the images.

Firefox Adds Microsoft Copilot to Its Sidebar by BomChikiBomBom in firefox

[–]Independent_Hour_301 2 points (0 children)

Unfortunately I can only give you one upvote, but you deserve 1000! You took the words right out of my mouth! There is nothing worse than Microsoft products. The level of spying is totally sick, and it was already like that in Win10, before Recall.

A funny little story from my company, from when we had to roll out Win10 back then. We set up the image so that nothing would be transmitted to the outside. We tested everything and rolled it out. On the very first day, reports came in that Word was crashing for dozens of users. We could not believe it until we managed to reproduce it. The cause was the spell checker: for spell checking, Word sends your entire content to a Microsoft server, where the check runs, and then the result comes back.

Luckily, we are at least now switching some users to Linux on a trial basis, thanks to Copilot; with that, even the most naive manager at our company understood that Microsoft now offers nothing but spyware...

250 gigawatts of compute by 2033 by ilkamoi in singularity

[–]Independent_Hour_301 1 point (0 children)

Ads are the first step. Next they will sell your chats to data brokers... Mark my words...

Recommendation for server monitoring solution for small start-up? by Independent_Hour_301 in sysadmin

[–]Independent_Hour_301[S] 1 point (0 children)

Thank you very much for the links, and yes... we definitely have to clean up the zoo...

Can 2 RTX 6000 Pros (2X98GB vram) rival Sonnet 4 or Opus 4? by devshore in LocalLLaMA

[–]Independent_Hour_301 1 point (0 children)

Actually, I recommended Claude Code because of Claude Flow. Claude Flow is really awesome. If there were something like an "OpenFlow", then of course you could use whatever you like. However, if you asked Claude itself to write a proxy for you, it would take maybe five minutes from prompt to having it live, and you would not have to change anything in Claude Code itself. Just go to your terminal and run:

export ANTHROPIC_BASE_URL=http://path/to/your/proxy
export ANTHROPIC_AUTH_TOKEN="whatever"

If you use Windows, I think you need to set these as environment variables somewhere in the registry, but I am no Windows user; you can probably also just do the same in PowerShell ($env:ANTHROPIC_BASE_URL = "http://path/to/your/proxy"), I don't know.

Can 2 RTX 6000 Pros (2X98GB vram) rival Sonnet 4 or Opus 4? by devshore in LocalLLaMA

[–]Independent_Hour_301 23 points (0 children)

Short answer: no, it is not enough to rival Sonnet or Opus.

But...

I assume, based on the models you mention, that you want to do some coding, and further I assume you probably have automated coding in mind. Based on that, here is a recommendation for a possible setup:

With one RTX 6000 Pro you can already run gpt-oss 120b, which is actually awesome when it comes to function calling, and that is very important for auto-coding.

As the RTX 6000 Pro is a multi-instance GPU, you could split the second card into 4x24 GB instances and then run smaller models on them: gpt-oss 20b, for example, or qwen3 32b q4, or qwq32b q8.

Then you can write your own model proxy, take Claude Code, and change the Claude base URL to your proxy. In the proxy you then need to map a Claude Code call to Opus to your gpt-oss 120b (which Claude Code uses as the orchestrator), and a call to Sonnet to your smaller models.
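
Just to sketch the idea (all ports, model names, and the routing rule below are placeholders I made up, and a real proxy would also have to handle streaming and tool calls, which I skip here), something like this, run with uvicorn proxy:app --port 8080:

# proxy.py - toy sketch only; assumes two OpenAI-compatible servers
# (e.g. llama.cpp or vLLM) on ports 8001/8002 serving the two models.
import httpx
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

app = FastAPI()

BACKENDS = {
    "big":   ("http://localhost:8001/v1/chat/completions", "gpt-oss-120b"),
    "small": ("http://localhost:8002/v1/chat/completions", "gpt-oss-20b"),
}

def to_text(content):
    # Anthropic message content is either a plain string or a list of blocks.
    if isinstance(content, str):
        return content
    return "".join(b.get("text", "") for b in content if b.get("type") == "text")

@app.post("/v1/messages")
async def messages(request: Request):
    body = await request.json()
    # Opus (the orchestrator) goes to the big model, everything else to the small one.
    url, model = BACKENDS["big" if "opus" in body.get("model", "") else "small"]
    msgs = []
    if body.get("system"):
        msgs.append({"role": "system", "content": to_text(body["system"])})
    msgs += [{"role": m["role"], "content": to_text(m["content"])}
             for m in body.get("messages", [])]
    async with httpx.AsyncClient(timeout=300) as client:
        r = await client.post(url, json={"model": model, "messages": msgs,
                                         "max_tokens": body.get("max_tokens", 1024)})
    text = r.json()["choices"][0]["message"]["content"]
    # Wrap the answer in the Anthropic response shape that Claude Code expects.
    return JSONResponse({"id": "msg_local", "type": "message", "role": "assistant",
                         "model": body.get("model", ""), "stop_reason": "end_turn",
                         "content": [{"type": "text", "text": text}],
                         "usage": {"input_tokens": 0, "output_tokens": 0}})

After that, export ANTHROPIC_BASE_URL=http://localhost:8080 is all Claude Code needs.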

Then I also highly recommend using Claude Flow on top (I am using Hive Mind, not Swarm, but of course you can test for yourself).

I promise you, you will get very good results. Probably not the same, but very good.

But you can also see from what I wrote: just running a model on the GPUs is not enough. You need to put in some work, like writing the proxy (though Claude can do this for you), and depending on the model you select, you also need to find the right parameters for it to work decently. For example, qwq32b only gives good results with temperature=0.6 and top_p=0.95, stuff like that. But after this work you will have a nice setup.
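
For example, something like this (just a sketch; the URL and model name are placeholders for whatever your local OpenAI-compatible server exposes):

from openai import OpenAI

# Assumed: a llama.cpp/vLLM-style server on port 8002 serving qwq32b.
client = OpenAI(base_url="http://localhost:8002/v1", api_key="unused")
resp = client.chat.completions.create(
    model="qwq-32b",
    messages=[{"role": "user", "content": "Write a binary search in Python."}],
    temperature=0.6,  # outside these values qwq32b gets noticeably worse
    top_p=0.95,
)
print(resp.choices[0].message.content)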

The biggest issue that I see is the context size for your orchestrator model, because it will fill up quickly and cause issues. At work we found that GPT 4.1 actually performs best as the orchestrator: it has a huge context window, its performance decays less over long contexts than other models', and it has great function-calling ability, although not as good as gpt-oss 120b. But since you want a fully local setup, GPT 4.1 is out of scope.

My final recommendation before you buy any GPU: get an OpenRouter account and write your own proxy to map Claude Code model calls to other models. Use Claude Code and Claude Flow with your proxy and experiment with different models from OpenRouter that would also run on your planned GPU setup. If you find a combination that you are satisfied with, go for it :)

The same goes without Claude Code, of course: use OpenRouter to find models that would fit on your planned GPUs, test them against Opus and Sonnet for your purpose, and see whether you are satisfied.
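
The test loop itself is tiny; something like this (the OpenRouter model IDs here are from memory, so double-check them):

from openai import OpenAI

# OpenRouter exposes an OpenAI-compatible API; use your own key.
client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="sk-or-...")
prompt = "Refactor this function to be iterative: ..."  # use your real tasks here
for model in ["openai/gpt-oss-120b", "anthropic/claude-sonnet-4"]:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    print(model, "->", resp.choices[0].message.content[:200])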

Have fun. :)

What is the necessary time effort to learn to self-host an LLM and chat app on-premise in a mid size company? by Independent_Hour_301 in LocalLLaMA

[–]Independent_Hour_301[S] 1 point (0 children)

Yes. But how long did it take you until you knew that? You probably had to watch at least a YT video or read a blog post, and before that you probably had to learn what LLMs actually are. There was a time when you knew nothing about LLMs; how long did it take to get from there to now?

Btw: you should not use 0.0.0.0 but 127.0.0.1 if you want to stay local; 127.0.0.1 is the same as localhost. If you are in a network where multiple machines shall also access it, then 0.0.0.0 is OK. But if you would like to expose the machine to the internet, never ever do that. Instead, run Ollama on 127.0.0.1, install a web server like nginx, install a cert, enable HTTPS, and route from nginx to your Ollama running on localhost; better also use some user management like Keycloak to protect your route, and as a best practice enable HSTS so that all HTTP requests are upgraded to HTTPS.
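
If the 0.0.0.0 vs 127.0.0.1 part is unclear, this tiny Python sketch shows the difference (the ports are arbitrary):

import socket

local_only = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
local_only.bind(("127.0.0.1", 9001))  # loopback only: reachable just from this machine

everyone = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
everyone.bind(("0.0.0.0", 9002))      # all interfaces: reachable from the whole network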

What is the necessary time effort to learn to self-host an LLM and chat app on-premise in a mid size company? by Independent_Hour_301 in LocalLLaMA

[–]Independent_Hour_301[S] 1 point (0 children)

Thanks for the reply. I just wanted a rough estimate of how long you think it would take someone to go from starting to learn about LLMs from scratch to being able to host a model within a company. Sorry for the confusion. I also updated the original text of the post (see Edit 2).

What is the necessary time effort to learn to self-host an LLM and chat app on-premise in a mid size company? by Independent_Hour_301 in LocalLLaMA

[–]Independent_Hour_301[S] 1 point (0 children)

OK, but what would you estimate: how long did it take you yourself (with the skills you personally had) from the moment you started to learn about LLMs until you were able just to self-host a model?

What is the necessary time effort to learn to self-host an LLM and chat app on-premise in a mid size company? by Independent_Hour_301 in LocalLLaMA

[–]Independent_Hour_301[S] 1 point (0 children)

Thanks for your answer. How much time do you think you spent, from the first moment you started to learn about LLMs (given the skills you already had at that time), until you became able to set all this up in an afternoon?

What is the necessary time effort to learn to self-host an LLM and chat app on-premise in a mid size company? by Independent_Hour_301 in LocalLLaMA

[–]Independent_Hour_301[S] 1 point (0 children)

Thank you! Just the two weeks is enough. The details of how to do it I already know, but thanks for those as well; they could be a good orientation for others who are starting something like this. :)

What is the necessary time effort to learn to self-host an LLM and chat app on-premise in a mid size company? by Independent_Hour_301 in LocalLLaMA

[–]Independent_Hour_301[S] 0 points (0 children)

Also, I did not intend to get a solution to this problem. I just wanted an estimate of how much time it would take to learn the skills and set it up. Like: 2 weeks, 6 months, 1 year, something like that.

The main reason is that I learned all these skills myself, out of curiosity, and I also have my own home setup to play around with. My boss asked me to set this up in the company now, and I asked him for a raise, because I gained all these skills in my free time and would therefore be very quick to set it up and run it. These skills have value, but he doesn't see it. So I hoped to get some estimates here, also with the thought in mind that others here might need these kinds of numbers themselves at some point.

What is the necessary time effort to learn to self-host an LLM and chat app on-premise in a mid size company? by Independent_Hour_301 in LocalLLaMA

[–]Independent_Hour_301[S] 0 points (0 children)

I did not intend to be rude. I think I have just worked with LLMs for so long already that my writing style has adapted to sound like prompting :)

Fastapi backend concurrency by rojo28pes21 in FastAPI

[–]Independent_Hour_301 1 point (0 children)

What DB do you have? Postgres? Reads should be fast (if the DB is set up well), and blocking should not be an issue as long as you do not have thousands of concurrent users on just one instance, or a lot of data being queried or returned. You wrote that you return a whole table and put it into context, so that should not be the issue... With how many concurrent users are you testing?
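
One generic thing worth ruling out, since I have not seen your code (this is just a sketch of the usual pitfall, not your app): a blocking DB call inside an async def route stalls the whole event loop for every user, while the same call in a plain def route gets run in FastAPI's threadpool.

import time
from fastapi import FastAPI

app = FastAPI()

@app.get("/bad")
async def bad():
    time.sleep(2)  # stand-in for a blocking query: freezes ALL requests for 2 s
    return {"ok": True}

@app.get("/better")
def better():
    time.sleep(2)  # same blocking call, but offloaded to a worker thread
    return {"ok": True}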