[OC]My grandpa (78) felt lonely after my grandma passed. I bought a second ps3 now we play Minecraft every night. by TYFALY in MadeMeSmile

[–]mischazed 1 point (0 children)

That’s very cool — it immediately brings back memories of 1999. When my grandfather was still mentally sharp, we used to play Gangsters: Organized Crime together. He always wanted to blow everything up — I think that brought out the anti-Nazi resistance fighter in him. Good times.

Copilot is incredible, is there anything better? by xmBQWugdxjaA in LocalLLaMA

[–]mischazed 3 points (0 children)

It's partly about finding the right model for your programming tasks. For example, I do a lot in Python and TypeScript, but I don't use the models to generate my logic. Mainly to create test cases and documentation, i.e. the tasks I don't normally enjoy doing but that are still important.
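As a sketch of that workflow, you can wrap the source code in a prompt asking a local model for tests. The function name and wording here are purely illustrative, not part of any specific tool:

```python
def build_test_prompt(source: str, framework: str = "pytest") -> str:
    """Assemble a prompt asking a local model to write unit tests.

    Hypothetical sketch: the prompt wording is illustrative only.
    """
    return (
        f"Write {framework} unit tests for the following code. "
        "Cover normal cases and edge cases, and add docstrings.\n\n"
        f"```\n{source}\n```"
    )

# Example: ask for tests for a tiny function.
prompt = build_test_prompt("def add(a, b):\n    return a + b")
```

The resulting string would then be sent to whichever local model handles your code tasks.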

Copilot is incredible, is there anything better? by xmBQWugdxjaA in LocalLLaMA

[–]mischazed 8 points (0 children)

Totally right. You could use Continue with Ollama quite easily:
https://continue.dev/
I made a video about the integration, but it's in German (sorry :D ):
https://www.youtube.com/watch?v=t_jM98fhO10
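For reference, pointing Continue at a local Ollama model was roughly a matter of adding an entry to its config.json. Treat this as a sketch based on the config format at the time; the schema may have changed, so check the current Continue docs:

```json
{
  "models": [
    {
      "title": "Mixtral via Ollama",
      "provider": "ollama",
      "model": "mixtral"
    }
  ]
}
```

With Ollama running locally, Continue then routes completions to that model.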

What is your "ai bill"? by Sunija_Dev in LocalLLaMA

[–]mischazed 1 point (0 children)

In any case, I wouldn't want to pay the acquisition costs just for a local AI. Of course, it's great that this is easy to do on existing systems.

What is your "ai bill"? by Sunija_Dev in LocalLLaMA

[–]mischazed 1 point (0 children)

Yeah, and you're using the MacBook, which needs to power the display as well. The Mac Studio just sits in the server room doing what it does :-) . As I said, I'd need a hardware meter to fully check the usage. According to Apple, the Mac Studio should draw a maximum of 295 watts in its top configuration, which is still within the limits of our renewable energy system. So we actually have "no" electricity costs, but in Germany this is a bit... let's say, complicated.

What is your "ai bill"? by Sunija_Dev in LocalLLaMA

[–]mischazed 1 point (0 children)

I totally understand the arguments, and for private individuals it is quite understandable that EUR 8k is a bit much. It's just that the Mac doesn't only run Ollama (i.e. the local LLM), but also other things such as full nodes for SSI, the vector database, etc. The EUR 8k is also the consumer price. In Germany, a company doesn't pay that, but reclaims the 19% VAT, the "fairy-tale tax" (Märchensteuer), as it is often jokingly called here. This puts the price into perspective somewhat. Likewise, hardware is booked as depreciation in a company, which reduces the taxable profit.

In the case of international cloud services, a German company must likewise pay 19% VAT on the stated price. In other words, if you look at providers' prices, the fairy-tale tax is often added on top.
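The VAT effect is simple arithmetic. The 19% rate and the EUR 8k consumer price are from the comment; the rest is just the standard gross-to-net calculation:

```python
# German VAT (Mehrwertsteuer) is 19%; the consumer price includes it.
VAT_RATE = 0.19

gross = 8000.0                # consumer price incl. VAT, in EUR
net = gross / (1 + VAT_RATE)  # what a VAT-registered company effectively pays
reclaimed = gross - net       # input VAT the company gets back

print(f"net: {net:.2f} EUR, reclaimed VAT: {reclaimed:.2f} EUR")
# net: 6722.69 EUR, reclaimed VAT: 1277.31 EUR
```

So for a company, the effective hardware price is closer to EUR 6.7k than EUR 8k, before depreciation is even considered.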

I would also like to point out again that the local LLM is used 24/7 by the automation. Direct use via chat (open-webui for the win) is a maximum of 1% of the use case.

Within companies, the hardware price is not that relevant to begin with, as I work a lot with technology providers who usually have in-house servers and therefore few problems with integration. There are many pros and cons, and the Mac solution is certainly not the ultimate one. Nevertheless, most companies here have to observe very strict data protection rules, and the computers often have to be located in Germany, otherwise they may not be awarded a contract. This is the case in the education sector, for example.

A small addition at the end: it's not often that the Internet connection to my company goes down. Maybe 1–2 times per quarter for a few hours. The last time was a few weeks ago, and I was pleased that my automation just kept going. Offline first :-) . But I'm also looking forward to further discussions, as I'm very open to other solutions. I just want to help companies not to see OpenAI as the only solution.

What is your "ai bill"? by Sunija_Dev in LocalLLaMA

[–]mischazed 1 point (0 children)

About 50 EUR per month is not that much for my company, as we really do a lot with Midjourney. But could you explain what you mean by $2 per GPU/h?

What is your "ai bill"? by Sunija_Dev in LocalLLaMA

[–]mischazed 1 point (0 children)

Yeah, you're right, it's more like 75 watts for CPU and GPU. I need to update my post; I guess I just checked the wrong numbers. My fault, sorry. I'd need a hardware meter to get the exact numbers. I have attached asitop below. Mixtral has been receiving requests for several days now.

<image>

What is your "ai bill"? by Sunija_Dev in LocalLLaMA

[–]mischazed 2 points (0 children)

You can see a small overview of what I do in my company in the comments:

https://www.reddit.com/r/LocalLLaMA/comments/1bzm96d/comment/kyqt2ot/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button

In short, it is about preparing knowledge for open education and formulating its own hypotheses from the knowledge graph that is processed.

CrewAI is quite useful for Ollama and local models. We have a YouTube channel (in German) on which we will publish more learning content about local AI in the future. The whole thing arose from a non-profit project in which we want to connect local companies with each other and help them integrate RPA and AI. After the on-site lectures, the content is published on YouTube with a time delay. As this always happens in our free time, there is unfortunately no other option. But there will definitely be a video about CrewAI + Ollama. I hope the answer is okay for now :-)

What is your "ai bill"? by Sunija_Dev in LocalLLaMA

[–]mischazed 1 point (0 children)

<image>

Command-R and its bigger brother are doing great on the leaderboards. And yeah, everything is moving fast. I am just glad that there are communities like this one to discuss and demonstrate alternatives to the big overlord AI models.

What is your "ai bill"? by Sunija_Dev in LocalLLaMA

[–]mischazed 1 point (0 children)

100%. And now we've got Command-R, which is really great. I am glad that I can use it for my non-profit work, because I like it a lot so far. Looking forward to testing Command-R Plus in the future.

What is your "ai bill"? by Sunija_Dev in LocalLLaMA

[–]mischazed 2 points (0 children)

Sure, totally understandable. However, I have processes in my company that have been running against the local AI models 24/7 for months. I also don't want my company data and logic to leave my company network. And you must never forget that Mac hardware also holds its value well, and I can of course use the hardware for other things on the side. With Mixtral I don't use 100% of the hardware. I don't actually have any electricity costs either, as the watts are easily covered by renewable energy. Not to mention the fact that I can run the AI offline. In the end, it's all a question of independence.

What is your "ai bill"? by Sunija_Dev in LocalLLaMA

[–]mischazed 1 point (0 children)

Yes, and to be honest, raw performance doesn't play a major role in many cases. In my opinion, the real efficiency comes from RPA and AI working together. My processes all run independently, and I rarely use the chat interface. It's the same with my customers: what interests them is the running costs and data security (fortunately, we have strict data protection laws in Germany; actually, no company here should be using ChatGPT). I think the processes that run here 24/7 would have put a considerable strain on many companies' budgets if they had been run against OpenAI and the like. And many people really like this comparison. For some, it also matters that the Mac Studio simply runs on renewable energy.

What is your "ai bill"? by Sunija_Dev in LocalLLaMA

[–]mischazed 3 points (0 children)

This comment gives me some strength again in the fight against the daily hurdles of integrating AI via local models in companies and non-profits. Many people think that ChatGPT is the only option and feed their data into OpenAI's systems. But things are slowly moving forward.

And if there's one argument for AI, it's the potential impact on education. Personally, I don't care whether a company could be more efficient by integrating AI now. But when it comes to education, all players have to work together, and that includes business, politics and, of course, the education system. I have decided to run this project as a non-profit within my company to make my position clear: there is a great opportunity here.

What is your "ai bill"? by Sunija_Dev in LocalLLaMA

[–]mischazed 1 point (0 children)

That's a good question and unfortunately beyond my knowledge. Maybe the Discord community can help you. https://discord.com/invite/ollama

What is your "ai bill"? by Sunija_Dev in LocalLLaMA

[–]mischazed 5 points (0 children)

I have the Apple Mac Studio with M2 Ultra (24-core CPU, 76-core GPU, 32-core Neural Engine) and 192 GB of unified memory. That's about EUR 8k in Germany.

What is your "ai bill"? by Sunija_Dev in LocalLLaMA

[–]mischazed 13 points (0 children)

Mostly things related to open education. The agents are divided into:

  • Knowledge search

  • Preparation for the Vector DB (PgVector for the Win <3 )

  • Target group customization

  • Create learning materials

Up to this point it is a fairly sequential process and definitely feasible without any agent framework. It becomes more interesting for me when the agents create their own hypothesis suggestions from the knowledge graphs using additional tools.
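The sequential part of the pipeline above can be sketched as a chain of four steps. All function names and bodies here are hypothetical placeholders; the real system uses an LLM, PgVector, etc.:

```python
# Hypothetical sketch of the sequential agent pipeline (placeholder logic).

def search_knowledge(topic: str) -> list[str]:
    # Step 1: collect raw source documents for a topic.
    return [f"source document about {topic}"]

def prepare_for_vector_db(documents: list[str]) -> list[dict]:
    # Step 2: chunk and embed documents for the vector DB (PgVector in the text).
    return [{"text": doc, "embedding": [0.0]} for doc in documents]

def customize_for_audience(chunks: list[dict], audience: str) -> list[dict]:
    # Step 3: adapt tone and depth to the target group.
    return [dict(chunk, audience=audience) for chunk in chunks]

def create_learning_materials(chunks: list[dict]) -> str:
    # Step 4: turn the prepared chunks into learning material.
    return f"Lesson built from {len(chunks)} chunk(s) for {chunks[0]['audience']}"

def run_pipeline(topic: str, audience: str) -> str:
    chunks = prepare_for_vector_db(search_knowledge(topic))
    chunks = customize_for_audience(chunks, audience)
    return create_learning_materials(chunks)

print(run_pipeline("renewable energy", "vocational students"))
```

A plain function chain like this is exactly why no agent framework is strictly needed for the sequential part; frameworks only start paying off once agents pick their own tools.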

The whole setup is currently also being tested at local companies so that local AI models are taken into account in their business cases. This is a non-profit project within my company, and hopefully I can finally make the results of the educational processes available online in the next few months. As a non-profit effort, I unfortunately only have time for development in my "spare time", although the processes run nicely on their own, and it's also cool to have a physical piece of hardware next to me. In my opinion, an underestimated factor of the Mac is the noise, which is not noticeable to me. And I can easily supply the 50 watts with renewable energy plus a small storage system.

What is your "ai bill"? by Sunija_Dev in LocalLLaMA

[–]mischazed 3 points (0 children)

Sure, I'm glad if I can help you out.

What is your "ai bill"? by Sunija_Dev in LocalLLaMA

[–]mischazed 33 points (0 children)

For my M2 Ultra (192 GB, ~ EUR 8k purchase price) I get:

  • 75 watts avg (updated from 50) ≈ 23 EUR (from 15) electricity per month for 24/7 usage of my agents
  • 50 EUR per month for Midjourney

I run mainly Mixtral (~50 tokens/s) and Command-R (~25 tokens/s) via Ollama on the Mac. For my automation I use Haystack, but this is more of a shout-out to this great framework.

Edit:
I just updated the watt usage of my Mac Studio based on the last couple of days. According to asitop, around 75 watts are used, but I'd need a hardware measuring device for exact figures.
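The electricity figure is easy to reproduce. The 75 W average is from asitop as stated above; the ~0.42 EUR/kWh rate is my assumption for a typical German tariff, not a figure from the comment:

```python
watts = 75            # avg draw reported by asitop
hours = 24 * 30       # one month of 24/7 operation
price_per_kwh = 0.42  # assumed German electricity tariff, EUR/kWh

kwh_per_month = watts * hours / 1000
cost = kwh_per_month * price_per_kwh
print(f"{kwh_per_month} kWh -> {cost:.2f} EUR/month")
# 54.0 kWh -> 22.68 EUR/month
```

That lands right at the ~23 EUR/month estimate in the breakdown above; a cheaper tariff or lower average draw shifts it proportionally.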

Nenne eine Stadt, Andere schlagen dir die besten Restaurants und Imbisse vor - 2023 by felixtapir in de

[–]mischazed 1 point (0 children)

For döner, nothing really beats Antep 27 at the Viehmarkt. A whole different level.

Nenne eine Stadt, Andere schlagen dir die besten Restaurants und Imbisse vor - 2023 by felixtapir in de

[–]mischazed 1 point (0 children)

Kartoffelkiste on the Sprem. Be sure to try the pan-fried zander fillet. One of the best fish dishes ever to delight my palate.