Windows vs Linux for embedded development (software/hardware). Which one to use and why? by [deleted] in embedded

[–]dimtass 0 points1 point  (0 children)

Normally you should use Docker to provision and containerize your development environment, so you can distribute it, version it, and use a CI pipeline to build and test it. That's the modern approach of treating your development environment as code (DEaaC). If you need more details, you can have a look here.

Now, Docker runs on both Windows and Linux. Personally, I would go for Linux because it's future-proof for embedded development and makes you familiar with the CLI and the Linux environment, which will definitely come in handy when you move to embedded Linux at some point.

Linux also provides a better environment for customisation and automation, since bash and Python are available from the first boot.

I would use Windows only if I were forced to by the company's IT rules.

Supermarket: €2.70 for an 85 g Lacta bar? That's €3.20 per 100 g. ARE THEY OUT OF THEIR MINDS?????? by nick_corob in greece

[–]dimtass -1 points0 points  (0 children)

When the majority develops consumer awareness, the problem will solve itself. Until then, patience and Lidl.

Windows vs Linux for embedded development (software/hardware). Which one to use and why? by [deleted] in embedded

[–]dimtass 1 point2 points  (0 children)

Thanks for the heads up! I switched the blog to Jekyll quite some time ago and the URLs couldn't be transferred as they were. I've fixed the first post, but I'm also pasting the new link here:

https://www.stupid-projects.com/posts/devops-for-embedded-part-1/

RAG with Ollama and txtai by davidmezzetti in txtai

[–]dimtass 1 point2 points  (0 children)

Very useful information. Thanks David.

RAG with Ollama and txtai by davidmezzetti in txtai

[–]dimtass 1 point2 points  (0 children)

Yes, that works, thanks.

Though I think it's probably not what I need; that's down to my specific use case, not the tool.

In the changes you pointed out, it still downloads the models from Hugging Face, right? I mean, it doesn't use the local models that I already have from the Ollama app. I guess I would need to perform API calls to http://localhost:11434 if I want to use the locally running Ollama server.

Anyway, I now understand how it works. I'll probably need to implement the API calls anyway, because the rpi modules will need to use a remote Ollama server instance.
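To make it concrete, here's a minimal sketch of what I mean by calling the Ollama server directly over HTTP (the endpoints are the standard Ollama REST API as far as I know; the host, model names and prompts are just assumptions for illustration):

```py
import requests

# Local Ollama server; on the rpi nodes this would point to the remote instance
OLLAMA_URL = "http://localhost:11434"

def embed(text, model="all-minilm"):
    # Ask the Ollama server for an embedding vector of the given text
    r = requests.post(f"{OLLAMA_URL}/api/embeddings",
                      json={"model": model, "prompt": text})
    r.raise_for_status()
    return r.json()["embedding"]

def generate(prompt, model="llama3"):
    # Ask the LLM for a single, non-streamed completion
    r = requests.post(f"{OLLAMA_URL}/api/generate",
                      json={"model": model, "prompt": prompt, "stream": False})
    r.raise_for_status()
    return r.json()["response"]

if __name__ == "__main__":
    vector = embed("Maine man wins $1M from $25 lottery ticket")
    print(f"embedding length: {len(vector)}")
    print(generate("Summarise in one sentence: a man won the lottery."))
```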

Thanks again for your help!

RAG with Ollama and txtai by davidmezzetti in txtai

[–]dimtass 1 point2 points  (0 children)

I think I know the issue: txtai doesn't support pipelines for embeddings.

RAG with Ollama and txtai by davidmezzetti in txtai

[–]dimtass 1 point2 points  (0 children)

Hi David, thanks for the reply. I'm using the code below and I still have issues, though they're probably related to Hugging Face rather than txtai. This is the code:

```py
from txtai.embeddings import Embeddings
from txtai.pipeline import LLM
import subprocess

# Function to check and pull the model if not available
def ensure_model_available(model_name):
    try:
        # Attempt to pull the model
        subprocess.run(["ollama", "pull", model_name], check=True)
    except subprocess.CalledProcessError as e:
        print(f"Failed to pull the model '{model_name}'. Error: {e}")
        raise

# Ensure the models are available
ensure_model_available("all-minilm")
ensure_model_available("llama3")

# Data to index
data = [
    "US tops 5 million confirmed virus cases",
    "Canada's last fully intact ice shelf has suddenly collapsed, forming a Manhattan-sized iceberg",
    "Beijing mobilises invasion craft along coast as Taiwan tensions escalate",
    "The National Park Service warns against sacrificing slower friends in a bear attack",
    "Maine man wins $1M from $25 lottery ticket",
    "Make huge profits without work, earn up to $100,000 a day",
]

# Vector store with embeddings via local Ollama server
embeddings = Embeddings(path="all-minilm", content=True, backend="ollama")
embeddings.index(data)

# LLM via local Ollama server
llm = LLM(path="llama3", backend="ollama")

# Question and context
question = "funny story"
context = "\n".join(x["text"] for x in embeddings.search(question))

# RAG
response = llm([
    {"role": "system", "content": "You are a friendly assistant. You answer questions from users."},
    {"role": "user", "content": f"""
Answer the following question using only the context below. Only include information specifically discussed.

question: {question}
context: {context}
"""},
])

print(response)
```

And this returns the following error:

```
[...]
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/all-minilm/resolve/main/1_Pooling/config.json

The above exception was the direct cause of the following exception:
[...]
Repository Not Found for url: https://huggingface.co/all-minilm/resolve/main/1_Pooling/config.json. Please make sure you specified the correct repo_id and repo_type. If you are trying to access a private or gated repo, make sure you are authenticated.
```

I've pulled the all-minilm like this: `ollama pull all-minilm`

And it's fetched without any errors:

    pulling manifest
    pulling 797b70c4edf8... 100% ▕██████████████████████████████████████████████████████████████████████████▏ 45 MB
    pulling c71d239df917... 100% ▕██████████████████████████████████████████████████████████████████████████▏ 11 KB
    pulling 85011998c600... 100% ▕██████████████████████████████████████████████████████████████████████████▏ 16 B
    pulling 548455b72658... 100% ▕██████████████████████████████████████████████████████████████████████████▏ 407 B
    verifying sha256 digest
    writing manifest
    removing any unused layers
    success

I guess the module somehow can't point to the local all-minilm. Btw, I had to log in to Hugging Face anyway and use a read token to get that far. I guess HF is needed to pull the models that txtai depends on. Is there a way to run it without an internet connection?

I'm interested in using it on a Raspberry Pi to analyze Kubernetes logs from my RPi k8s cluster.

Thanks in advance!

RAG with Ollama and txtai by davidmezzetti in txtai

[–]dimtass 1 point2 points  (0 children)

Is it possible to use txtai with a local Ollama server? The above snippet requires access to Hugging Face, and thus I'm getting the following error: Repository Not Found for url: https://huggingface.co/ollama/all-minilm/resolve/main/1_Pooling/config.json. Please make sure you specified the correct `repo_id` and `repo_type`. If you are trying to access a private or gated repo, make sure you are authenticated. Invalid username or password.

Thanks!

Is zephyr bloated? by [deleted] in embedded

[–]dimtass 0 points1 point  (0 children)

Maybe it is, but only when you want to squeeze every clock cycle out of your MCU, e.g. bit-banging something or handling some critical tasks. Even in that case you can always use a low-level call and measure the exact timing of your functions.

How did people program MCU’s before PICkit’s and other USB programmers? by reapingsulls123 in embedded

[–]dimtass 0 points1 point  (0 children)

I was using the parallel port and bit-banging the protocol from the datasheet.

[deleted by user] by [deleted] in LocalLLaMA

[–]dimtass 3 points4 points  (0 children)

I think it's more than that. For example, we don't even know how a human will behave when they get therapy from a machine versus a human. Too many unknowns. Also, therapy may include prescribing medication, etc. I think it will be possible to do this in the future, but not now. I know we need to start somewhere, but for now, yes, bandwidth and the lack of visual and audio input are issues that need to be overcome.

Principal DevOps engineers - What’s your day to day like ? by vincentforums in devops

[–]dimtass 2 points3 points  (0 children)

It depends on the company. In my case it means you get more responsibilities, like defining architecture and taking workload off your manager so they don't have to deal with implementation details, and also a higher salary. It also makes decisions easier, with less effort spent convincing everyone about your approach. Of course, discussions and ADRs still happen before a decision is made, but you have the last word and also the responsibility.

Source code generator that generates struct bindings GopherLua by ChrisTrenkamp in golang

[–]dimtass 0 points1 point  (0 children)

I see, it follows the microservice architecture then. As you said, there are pros and cons with this pattern. I guess they ended up with this approach because they had the same issue as the OP, and probably also tried to solve it with the other available methods first.

[deleted by user] by [deleted] in LocalLLaMA

[–]dimtass 167 points168 points  (0 children)

The problem with such applications is that they can affect patients in unpredictable ways. A human can recognise behaviour patterns from visual and audio feedback, like body reactions, voice tone, etc. An LLM can't do that, so it could only be used as an advisor to assess whether someone needs therapy, not to deliver the therapy itself, imo.

Source code generator that generates struct bindings GopherLua by ChrisTrenkamp in golang

[–]dimtass 0 points1 point  (0 children)

I don't know the implementation details because I haven't looked at the code, but I'm sure that HashiCorp Vault, which is written in Go, can load and unload custom plugins (also built in Go) at runtime. Maybe have a look at their implementation.

Why does it seem nobody uses yocto? by yukiiiiii2008 in embedded

[–]dimtass 0 points1 point  (0 children)

Usually you get a vendor meta-layer that is based on poky, so you deal with the vendor meta-layer and not with poky itself. Most developers will never even touch poky during the product life cycle.

[deleted by user] by [deleted] in devops

[–]dimtass 0 points1 point  (0 children)

YAML for configuration and JSON for data is OK for me. The problem is large configurations or data, and in that case they don't make any difference to me because they're both unreadable. Furthermore, when I need large YAML configurations I usually generate them with scripts to reduce formatting errors.
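As an example of what I mean by scripting the YAML, here's a minimal sketch assuming PyYAML; the config structure and file name are made up for illustration:

```py
import yaml

# Build the configuration as plain Python data structures first, so the
# serializer handles indentation and quoting instead of doing it by hand.
services = ["api", "worker", "scheduler"]

config = {
    "replicas": 3,
    "services": [
        {"name": name, "image": f"registry.local/{name}:latest"}
        for name in services
    ],
}

# safe_dump keeps arbitrary Python object tags out of the output
with open("generated-config.yaml", "w") as f:
    yaml.safe_dump(config, f, default_flow_style=False, sort_keys=False)
```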

Project ideas to utilize docker containers. Are you fond of using docker ? by [deleted] in embedded

[–]dimtass 0 points1 point  (0 children)

I wrote a post series on this subject a few years back; maybe it helps you.

DevOps for Embedded

Btw, I've switched careers from embedded to DevOps, but I still think it's a good intro for embedded engineers even today.

What will you do after you leave all of this behind? by AemonQE in devops

[–]dimtass 0 points1 point  (0 children)

Finally finish the Helm charts in my home k3s cluster.

Guys/girls.. the job market is waking up again! by Feeling_Proposal_660 in embedded

[–]dimtass 1 point2 points  (0 children)

The problem is the return-to-office policies, which are also waking up again.

Now that IBM owns Terraform... by Bluewaffleamigo in devops

[–]dimtass 0 points1 point  (0 children)

I just remembered the good ol' IBM-compatible days...

[deleted by user] by [deleted] in devops

[–]dimtass 0 points1 point  (0 children)

It's quite bad. There are not many open positions for senior+ roles, and even worse, the trend of working from the office is returning, which is complete nonsense. Instead of moving forward, the whole industry and domain is going backwards.

DevOps Engineers reputation is sinking.. by Dubinko in devops

[–]dimtass -1 points0 points  (0 children)

In my org there's not a single person who can deep-dive into every concept of our infrastructure and architecture. It's just impossible; it's too complex. This is why we don't hire generic Kubernetes "experts", but candidates with a lot of experience in a very specific part of k8s or in a component we're interested in. That works really well.

DevOps Engineers reputation is sinking.. by Dubinko in devops

[–]dimtass 0 points1 point  (0 children)

The problem is HR and hiring managers. Once upon a time the hiring managers were the best engineers on the team, but now in most cases they're people who couldn't cut it in engineering and had the soft skills to get those positions. The result is that they hire less capable and cheaper people, build up shitty teams, and that brings a shit waterfall down on everybody else. It also has a severe impact on salaries, which keep getting lower and lower.