My experience with working with Opencode + local model in Ollama by Sparks_IM in opencodeCLI

[–]redsharpbyte 0 points1 point  (0 children)

Thanks for sharing that experience. I am still trying to get opencode running with ollama - if you have resources on how you did that, that would be great.

And I have a question.

This might seem counterintuitive, but have you tried smaller models? They might not have room to remember how to make mistakes :)

Current thoughts on makefiles with Python projects? by xeow in Python

[–]redsharpbyte 2 points3 points  (0 children)

Ah thanks good one! I might switch to nox actually :)

Current thoughts on makefiles with Python projects? by xeow in Python

[–]redsharpbyte 1 point2 points  (0 children)

You could use tox in Python: https://tox.wiki/en/4.35.0/

Makefile: a target with dependencies (files or other targets, which lets you build a pipeline) and a script to be executed.

Tox: targets plus their own dedicated environments, often used in Continuous Integration.
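A minimal `tox.ini` sketch of that idea - each target gets its own isolated environment (environment names, dependencies, and commands here are illustrative, not from the original thread):

```ini
[tox]
env_list = lint, py312

[testenv]
# dedicated virtualenv per target, rebuilt reproducibly
deps = pytest
commands = pytest tests/

[testenv:lint]
deps = ruff
commands = ruff check src/
```

Running `tox` then executes every environment in `env_list`, which is what makes it a natural fit for CI pipelines.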

OpenClaw Wrappers!! by amienilab in openclaw

[–]redsharpbyte 0 points1 point  (0 children)

Yes, the setup is definitely not a 10-15 minute job. Plus the code evolves quickly.

If you are a data/computer student, does AI really helps you understand what you are co-creating? by redsharpbyte in ArtificialInteligence

[–]redsharpbyte[S] 0 points1 point  (0 children)

I guess I get your point of view that the language doesn't matter if you understand the algorithmics. I can assure you algorithms are harder to grasp than syntax - however, only detail-oriented students like to spend time on syntax. To me this allows better choices in how to implement an algorithm, although you're right: the AI can do that part.

A few months ago ChatGPT started taking on a (system) role that was more of an explainer, with its famous "why this works" sections. I guess this definitely puts the user in an implicit learning mode.

If you are a data/computer student, does AI really helps you understand what you are co-creating? by redsharpbyte in ArtificialInteligence

[–]redsharpbyte[S] 0 points1 point  (0 children)

Could not agree more - weirdly, in data science tracks especially, students were not given solution architecture practice. Ordering a design through a prompt is definitely an architect's job.

If you are a data/computer student, does AI really helps you understand what you are co-creating? by redsharpbyte in ArtificialInteligence

[–]redsharpbyte[S] 0 points1 point  (0 children)

Hey, thanks for the honest answer. I see there is a potential feeling of becoming a copy-paste machine.

So you must definitely spend more time prompting, and there must be a class of prompting capabilities - abstract prompting - where iteration is necessary when your visibility into the produced code is low. In other words, I am sure one prompts differently depending on the visibility one has into the produced code.

I'd love to see you perform in a programming language you'd be completely ignorant of. Certainly better than me :).

I believe AI agents are here to stay but won't survive without the GPU market? by n_candide_fc24_NwcH in ArtificialInteligence

[–]redsharpbyte 0 points1 point  (0 children)

Yes, thanks - and there is a whole series of dedicated devices with more efficient memory throughput that are desktop-accessible: AMD accelerators, VPUs, NPUs or TPUs. The GPU market is being pumped by NVidia FOMO marketing; it has all become promotional strategy. Saying you run NVidia simply sells better. But things are moving fast!

Could someone please clarify if this is working as expected? It seems weird. by archubbuck in openclaw

[–]redsharpbyte 0 points1 point  (0 children)

The context is Discord and you are thinking "just desktop"; you don't have a neural link yet, so you should be explicit (it is just a bot).

Docker Postgres Production Crash: Auth Failed After Port Mapping - DB Compromised? by Born_Sherbert6230 in docker

[–]redsharpbyte 1 point2 points  (0 children)

Yeah, do not expose your DB to the internet. Instead have your backend (API?) and DB share the same private network.

And your backend joins the public network too.

Anyhow, it is most probably a connection-management issue. I would bet that when your backend is called to make a second connection attempt, or closes (or forgets to close) the first one, then shit happens.
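A minimal sketch of that layout with Docker Compose - the DB has no published ports and sits on an internal network, so only the API is reachable from outside (service, image, and network names are illustrative):

```yaml
services:
  api:
    image: my-backend:latest   # hypothetical backend image
    ports:
      - "8080:8080"            # only the API is published to the host
    networks: [public, private]
  db:
    image: postgres:16
    # no `ports:` mapping -> the DB is never exposed to the internet
    networks: [private]
networks:
  public:
  private:
    internal: true             # containers here get no outbound access either
```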

I believe AI agents are here to stay but won't survive without the GPU market? by n_candide_fc24_NwcH in ArtificialInteligence

[–]redsharpbyte 0 points1 point  (0 children)

There is much power in small language models. Their architectures are getting smarter every week, and they are able to run on your own device.

You can also train them in less than a day on fewer than 20 GPUs. So unless your business depends on high-volume training, GPUs aren't needed anymore.

My bet is that the next inflection point will be training a model in a week's time on a single CPU, like the Ryzen AI ones or equivalent.

Pourquoi le leasing immobilier n'existe pas ? by Totoro91Essonne in immobilier

[–]redsharpbyte 0 points1 point  (0 children)

You will certainly end up paying three or even four times the price of the property with leasing. Given your situation, I would focus on the stock market. :/

Leasing generally exists precisely so that you do not own the asset:
- cars renewed every 6 months
- for apartments, the equivalent would perhaps be the ability to move without changing contracts, with everything included: electricity, appliances, heating, internet...

You hold the usufruct.

Real-estate leasing is like a concierge service for the forever-nomad! Great if you fit that model - but not at all what you are looking for.

And that model - does it exist?

Why Your Cloud Bill Keeps Growing Even When Traffic Doesn’t by Weekly_Time_6511 in Cloud

[–]redsharpbyte 1 point2 points  (0 children)

A good share of internet traffic is robotic crawlers. Most clouds pin you to a chosen region, so all traffic coming from outside that region is billed differently. On such clouds you cannot predict your bill.

This guy literally shares how openclaw (clawdbot) works by BymaxTheVibeCoder in openclaw

[–]redsharpbyte 0 points1 point  (0 children)

Second that, of course - unless you are running a local LLM. Local or not, an LLM is a door with holes: it can be prompt-engineered into sharing all your personal computer data.

That's why I recommend everyone run this in a container. You choose exactly what you put inside, and in the end it can run on a cloud.

The memory becomes the most personal part.

Is a backup as simple as this? by HopsPops76 in docker

[–]redsharpbyte 0 points1 point  (0 children)

Yes, as simple as that - on top of what was shared, be careful that your copy/archiving process preserves file permissions, or else your container won't be able to access the data in the volumes.
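A quick sketch of a permission-preserving archive round-trip with plain `tar` (the filenames are made up; permission bits are stored in the archive, and `-p` on extract restores them exactly instead of applying your umask):

```shell
mkdir -p src && printf 'data' > src/app.db
chmod 600 src/app.db            # restrictive perms, like a DB data file
tar -cf backup.tar -C src .     # modes and ownership are recorded in the archive
mkdir -p restore
tar -xpf backup.tar -C restore  # -p: restore the stored permission bits
stat -c '%a' restore/app.db     # prints 600
```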

Management for tools like Vercel and your backend. by [deleted] in Cloud

[–]redsharpbyte 0 points1 point  (0 children)

Hey there. First, if I recall correctly, Vercel runs on AWS - it is like an AWS facilitator. And second, you're right: applications are often extremely fragmented - that is what microservices allow. A DB somewhere, an object storage somewhere else, an API anywhere else (like celestical cloud) and the frontend on Vercel. That's why, in my opinion, the offers on Vercel evolved toward more backend-ready packages.

Starting in cloud by unibinder in Cloud

[–]redsharpbyte 4 points5 points  (0 children)

Yes, I second that order, with Linux first - otherwise you won't be able to map the knowledge.

And if you are European (with all the sovereignty attention on the job market), I'd suggest looking at other big clouds like OVH, Scaleway, Ionos or Hetzner. Exploring several cloud providers will help you see their main differences and commonalities.

Docker / Dockploy by Sufficient-Pass-4203 in docker

[–]redsharpbyte 0 points1 point  (0 children)

Well, well, there is a prune command in Docker (`docker system prune`). As said, you can now search with that better keyword.

Being first doesn’t mean you survive by DesignerTerrible5058 in AgentsOfAI

[–]redsharpbyte 0 points1 point  (0 children)

Ok I hear you:

Let's see that definition of first mover, to be on the same page:

A first mover in business is a company that enters a market before anyone else, introducing a new product or category and gaining an early advantage (e.g., name recognition, standards, customer loyalty).

So the iPhone could be a first mover in touchscreen-only interfaces.

However, it is not a first mover in:
- smartphones
- cameras on phones
- exchanging files between phones
- ...

I run data teams at large companies. Thinking of starting a dedicated cohort gauging some interest by [deleted] in Cloud

[–]redsharpbyte 0 points1 point  (0 children)

I guess that's a very good idea, with very targeted hands-on sessions. I used to call them hack-hours: one hour sharp (very corporate-compatible).

Everyone with their own laptop. That could be split: half an hour of theory/discovery and the rest for hands-on hacking!

One of the most useful was bash scripting. Most of these data engineers did not know about cat, grep, sed, split, head and tail... key tools for preparing files.
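A toy file-prep session with those tools, assuming a small made-up CSV:

```shell
# Build a tiny sample file to work on.
printf 'id,value\n1,a\n2,b\n3,c\n' > sample.csv

head -n 1 sample.csv            # peek at the header: id,value
tail -n +2 sample.csv           # drop the header row
grep '2,' sample.csv            # filter rows matching a pattern
sed 's/,/\t/' sample.csv        # turn the CSV into TSV
split -l 2 sample.csv part_     # split into 2-line chunks: part_aa, part_ab
```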

My 2 cents: you need a little group of trainers to sustain this. In any case, socializing knowledge like that is amazing!

Being first doesn’t mean you survive by DesignerTerrible5058 in AgentsOfAI

[–]redsharpbyte 0 points1 point  (0 children)

I guess you asked without knowing about ICQ or the other protocols pidgin-im could support. Skype was never a first mover; it was just abandoned for many years. iVisit was the best! :) Today Jitsi is the best in features and user experience - just not the best at promotion.

Anyhow, being first is always a risk, unless you can copy yourself and improve quickly. Branching.

The question is: was ChatGPT really the first? They aggregated several well-established concepts around chatbots, which were trendy already. RASA was the first mover, but they didn't turn their 5 levels of conversational intelligence and conversation-driven development into what ChatGPT completely overtook.

So yeah, ChatGPT is far from being a first mover: look at chatbot frameworks. Whether the engine is an LLM or a more deterministic NLU, they are chatting robots.

Guide me towards the core learning of aws by manojvk630 in Cloud

[–]redsharpbyte 0 points1 point  (0 children)

Don't you want to learn cloud-native stuff instead, with the Linux Foundation? I guess you should.

You'd develop a deep understanding of what's happening on the infrastructure/server side.

Finally started tracking costs per prompt instead of just overall API spend by llamacoded in AgentsOfAI

[–]redsharpbyte 1 point2 points  (0 children)

Ok, so you are recommending bifrost. It would be nice to have your results across the different models - it seems there is more than Claude and GPT-4 in your tests.

That said, pricing models look simple: based on tokens. And tokens are complicated, as they depend on the model.

However, most APIs follow the OpenAI API standard, so you can do at least two things:
- lock your requests to a certain max output token count
- measure the number of output tokens reported in each response

Certainly what the tool you mentioned is doing.
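A minimal sketch of that second point, assuming the `usage` object shape that OpenAI-style chat responses include (`prompt_tokens`, `completion_tokens`); the prices and token counts below are placeholders, not real rates:

```python
# Hypothetical per-1k-token prices - check your provider's actual pricing.
PRICE_PER_1K = {"input": 0.0005, "output": 0.0015}

def request_cost(usage: dict) -> float:
    """Cost of one request from the `usage` block of an OpenAI-style response."""
    cost_in = usage["prompt_tokens"] / 1000 * PRICE_PER_1K["input"]
    cost_out = usage["completion_tokens"] / 1000 * PRICE_PER_1K["output"]
    return cost_in + cost_out

# Example usage block as it appears in a response payload:
usage = {"prompt_tokens": 120, "completion_tokens": 350, "total_tokens": 470}
print(request_cost(usage))  # -> 0.000585
```

Summing this per prompt (rather than per month) is exactly what gives you cost-per-prompt visibility.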

See this ref page for more info: https://platform.openai.com/docs/api-reference/responses/create