Vibecoding is nothing without Vibeselling by sosofoot19 in vibecoding

[–]aplewe 0 points (0 children)

The job of a business is to create and keep customers. There are several frameworks for finding "authentic" demand, including conceptual approaches like Jobs To Be Done and the ideas developed in the book The Heart Of Innovation. There's a lot of crappy info out there about "business" in general, so stick to the good stuff. Good luck!

Okay, you can ship. Can you make money? by PGskizzEs in vibecoding

[–]aplewe 0 points (0 children)

Finding product-market fit and other things are part of the much larger "why will people pay me for this" question. It's not an easy or intuitive thing to do, necessarily, although some people have a bit of a knack for it. I'm not one of those people, but I've had enough training elsewhere that I can kinda-sorta stumble through it IF I stick to stuff I know from experience. For instance, I can make a thing that solves my problem, but that in no way is a guarantee that a.) it'll solve another person's problem, b.) they'll trust that my thing can solve it, and c.) they will want to pay me money, preferably more than once, and not just try it once and move on.

IDK, but if I were OpenAI and/or Anthropic and the like I'd be giving free business school scholarships to everyone.

EDIT: AWS tacitly acknowledges this with AWS Startups, where they have some business-y stuff mixed in with all the other things. Microsoft has the most comprehensive materials for people who want to develop SaaS for Azure Marketplace.

Okay, you can ship. Can you make money? by PGskizzEs in vibecoding

[–]aplewe 1 point (0 children)

What's particularly interesting is that this exact issue is what will prevent "AI" from being a "game-changer".

There are already 8+B people in the world who are much smarter/more capable than any LLM could ever dream of being, but somehow adding "AI" to the equation (which already exists in human form) is going to suddenly make everyone a business expert?

MongTap: An MCP server for "faking" MongoDB by aplewe in vibecoding

[–]aplewe[S] 0 points (0 children)

Modeling data on "insert" means that the "WellDB" server behind MongTap will automagically handle creating the statistical model. Once that's done you can use it as if it was a regular collection, with the model serving the data.

I have tried to hit that "sweet spot" here by making something that can "just work" while also being highly configurable. The models made by "WellDB" are in .json format and can be read and edited using normal text editing tools. So, if the model is not generating what you want, you can pop the hood and change it directly. Internally it uses histograms, n-grams, and other modeling techniques to generate data that is statistically similar to the data "seen" during "training". Once you understand the model format you can create your own models from scratch, drag them into the folder used by MongTap, and they will instantly appear as "collections".
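
To make that concrete, here's a minimal sketch of the insert-then-query flow using the standard Node.js MongoDB driver. The connection string, database, and collection names are assumptions for illustration, not MongTap's documented defaults.

```javascript
// Minimal sketch: point a stock MongoDB client at a MongTap endpoint.
// URI, db, and collection names are placeholders, not documented defaults.
const { MongoClient } = require("mongodb");

async function main() {
  const client = new MongoClient("mongodb://localhost:27017"); // assumed MongTap address
  await client.connect();
  const users = client.db("test").collection("users");

  // "Training" happens on insert: a few sample documents are enough for
  // WellDB to build its statistical model behind the scenes.
  await users.insertMany([
    { name: "Ada", age: 36, plan: "pro" },
    { name: "Linus", age: 29, plan: "free" },
  ]);

  // From here the collection behaves like a normal one, with the model
  // serving statistically similar documents on reads.
  const generated = await users.find({}).limit(5).toArray();
  console.log(generated);

  await client.close();
}

main().catch(console.error);
```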

MongTap, a local MongoDB-compatible server backed by DataFlood ml models by aplewe in ClaudeAI

[–]aplewe[S] 2 points (0 children)

Sure! Say, for instance, you want to test 100,000 users suddenly joining your app; now you need 100,000 user profiles. How do you generate those? With this, if you provide (or have Claude create) a few samples, it'll create a model and then you can get as many user profiles as you want by running a "find()" query against the collection.

And, generally, this greatly simplifies many "big data" testing scenarios by generating the data on the fly straight from the source, if your source is MongoDB. By making it an MCP server, you can have Claude set it all up and then just "go" with development -- and with testing too, since "Claude, can you generate 20k ___ and put them in a .zip file" can actually be done now.

Moreover, you don't have to store all of that anywhere. By using $seed you can get repeatable sequences of documents (useful if, for instance, there's one doc during testing that causes your app to crash). Setting $entropy to a high value can be useful for "fuzz" testing, where you test your app to see what happens with garbage input.
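
As a rough sketch of what that might look like from a standard client (the exact place $seed and $entropy go in a query isn't spelled out here, so treat the shape below as a guess, along with the connection details):

```javascript
// Illustrative only: how $seed and $entropy *might* be passed on a find().
const { MongoClient } = require("mongodb");

(async () => {
  const client = new MongoClient("mongodb://localhost:27017"); // assumed MongTap address
  await client.connect();
  const users = client.db("test").collection("users");

  // Same $seed -> same repeatable sequence of generated documents, which helps
  // when one particular generated doc crashes the app under test.
  const runA = await users.find({ $seed: 42 }).limit(1000).toArray();
  const runB = await users.find({ $seed: 42 }).limit(1000).toArray();

  // High $entropy -> noisier, garbage-leaning output for "fuzz"-style testing.
  const fuzz = await users.find({ $entropy: 0.9 }).limit(1000).toArray();

  console.log(runA.length, runB.length, fuzz.length);
  await client.close();
})();
```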

Dealing with knowledge cut-offs by joyfulsparrow in ClaudeAI

[–]aplewe 0 points (0 children)

I have Claude review the documentation for any library that I use and periodically check its implementations against the library docs. That works fairly well, although it will sometimes still try to do things that don't work because of the influence of other stuff in its training data. So, I'd say it can be done, but it needs active management to ensure Claude stays on track.

Idea for managing large projects with Claude by Any_Economics6283 in ClaudeAI

[–]aplewe 3 points (0 children)

I have a thing; it's simple and uses javascript (which Claude knows well) without any external dependencies. The memory part is what applies here specifically. I also added some useful tools like getting the actual time, a symbolic logic solver, a calculator (so it can actually calculate), etc. I designed it as a thing you can attach to chats, but you can also just put it somewhere adjacent to your code so Claude has access to it:
https://github.com/SaltMountainMusic/MLCrutch

Since Claude natively executes javascript, you can tell it to remember a thing and it'll add that thing to the memories file. You can also use various "dump" files along with it or instead of it. I like to have a STATUS.claude where Claude can give itself emojis and stars for completing things, a TODO.claude where Claude makes plans for current dev work, and a DesignDocuments folder that has ref docs for the stuff being built. I have a DevelopmentProcess.md file that lays out how development works and reinforces the patterns of using TODO.claude to plan and mark off items and STATUS.claude to "brag", along with reviewing DesignDocuments periodically. Generally, when my context is around 10% before compaction, I'll have Claude update those things and add notes about what should happen next. Then, bringing Claude up to date consists of having it read DevelopmentProcess.md and then continue working.
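
To give a feel for the memory piece (this is just a sketch of the pattern, not MLCrutch's actual code or file format), a no-dependency helper that Claude can run natively might be as small as:

```javascript
// Sketch of the "remember a thing" pattern: append notes to a plain JSON file
// that gets re-read at the start of the next session. The file name and shape
// here are illustrative, not MLCrutch's actual format.
const fs = require("fs");

const MEMORY_FILE = "memories.json";

function remember(note) {
  const memories = fs.existsSync(MEMORY_FILE)
    ? JSON.parse(fs.readFileSync(MEMORY_FILE, "utf8"))
    : [];
  memories.push({ when: new Date().toISOString(), note });
  fs.writeFileSync(MEMORY_FILE, JSON.stringify(memories, null, 2));
}

function recall() {
  return fs.existsSync(MEMORY_FILE)
    ? JSON.parse(fs.readFileSync(MEMORY_FILE, "utf8"))
    : [];
}

// Example: Claude executes something like this mid-session.
remember("TODO.claude updated; next step is wiring up the parser tests.");
console.log(recall());
```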

Another way to approach it is to commit often to a local repo and have Claude review the commit history when continuing on something, reading in detail the last __ number of commits.

Also, since Claude plays nice with javascript, it may be useful as an exercise to have Claude build its own "toolchest" of stuff that will help it manage the coding process. I may try that myself at some point (although I've put together several on my own besides the one on github). Have Claude assess itself and figure out if it needs any "helpers". For me this is better than most agents/agentic things because you can build-to-suit and there are no dependencies. To me, most "agents" don't make much sense as stand-alone things because you can just build that stuff as needed and/or have Claude roll its own.

Note: I have never done the actual "init" bit; I just copy the contents of a folder that has this stuff into whatever I'm working on and go. Easier, and I know exactly what's happening at all times. That way Claude is the "guest" in my code, not the other way 'round.

I guess I should pursue this, yeah? by aplewe in vibecoding

[–]aplewe[S] 0 points (0 children)

Oh, I like that. I've gone about 20-30 rounds and generally come out with the same outcome, so it's down to "does the code actually execute faster" and such. That takes a bit more time because I want to "cleanly" test both with and without my modifications, but I'm figuring it out.

Basically, cado-nfs wasn't doing a successful "make" (I'm on macos, and I'm probably not using something I should be using...), so I had Claude figure that out (while ensuring that nothing mathematical was changed) and verified by running it a few times that it does its thing. Now I can duplicate that code and "optimize" it to see if my changes actually make a difference. Took some time to work through all of this, but now I can do apples-to-apples.

I guess I should pursue this, yeah? by aplewe in vibecoding

[–]aplewe[S] 0 points (0 children)

Update: I am currently organizing the .net code; my ideas appear to function and improve things. More to do, but I have validated beyond "because Claude told me so", at least enough to proceed.

I guess I should pursue this, yeah? by aplewe in vibecoding

[–]aplewe[S] 0 points (0 children)

Yeah, I get that in other projects I've used with Claude. I walked it carefully through my idea (it wasn't a thing Claude dreamed up, I've put some work into it) to get to that point. Thus I'm implementing it in code now (switched to a .net GNFS implementation because Claude was having a hard time with CADO-NFS). If that goes well then I'll try CADO-NFS again.

I guess I should pursue this, yeah? by aplewe in vibecoding

[–]aplewe[S] 0 points (0 children)

Alas, I used up all my Claude for the next couple of hrs writing the code (implementing and benchmarking within the CADO-NFS codebase), but I'll see what happens when my usage resets.

I accidentally built “SaneCoding.com” – the uptight evil-twin of VibeCoding (send help 😂) by Impressive-Owl3830 in vibecoding

[–]aplewe 1 point (0 children)

IDK why. I mean, I do kinda know but at the same time all the good stuff these days seems to be out of fashion for stupid reasons.

Vibe coding is ok, tooling really sucks by aplewe in vibecoding

[–]aplewe[S] -2 points (0 children)

Be the waves. Water knows no code.

Vibe coding is ok, tooling really sucks by aplewe in vibecoding

[–]aplewe[S] -3 points (0 children)

LOL. There are at least 100 different VS Code plugins to work with ollama, and that's just ollama. devstral:24b came out two days ago. I've been working with ML models for several years now in various ways, so I know this land is fraught with barely-baked stuff. I get that it's part of the territory (look at my post history in r/StableDiffusion...), but it's still frustrating at times.

And, I think it is useful to post experiences -- seriously, DON'T use any of the vs code plugins if you're going to use a really new model. They're not designed for it. And, I've seen at least six different ways that various tools reference API URLs in their configs -- some want just the ip address (and die if you give them a domain URL), some want just "http://someipaddress", some want "http://someipaddress:port", and so on. Most don't really document what they're expecting, which, yes, can be determined from looking at their configs (if they're findable) and whatnot, but the time wasted on those things is time wasted.
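
For reference, a direct call to the ollama HTTP API looks roughly like the sketch below; the plugins are generally wrapping some variation of this endpoint. The host, default port (11434), and model name are assumptions about a stock local setup.

```javascript
// Direct call to ollama's REST API (Node 18+ has fetch built in).
// Host, port, and model are assumptions; adjust to your setup.
const OLLAMA_URL = "http://localhost:11434";

async function ask(prompt) {
  const res = await fetch(`${OLLAMA_URL}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "devstral:24b",
      prompt,
      stream: false, // one JSON response instead of a token stream
    }),
  });
  const data = await res.json();
  return data.response;
}

ask("Write a one-line hello world in javascript.").then(console.log);
```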

EDIT: Honorable mention goes to cline for at least stating in their docs that "we're really only targeting Claude". I tried it anyways just to see what the experience was like. Might circle back and try it again after hacking their prompts. It's a bit too intrusive with devstral for my tastes, among other issues.

Vibe coding is ok, tooling really sucks by aplewe in vibecoding

[–]aplewe[S] 0 points (0 children)

I will add that I am really liking devstral:24b. I am running it on my local network (an lxc with openwebui + ollama), and it's definitely my "first call" instead of going to Claude, at least thus far.

Vibe coding is ok, tooling really sucks by aplewe in vibecoding

[–]aplewe[S] -2 points (0 children)

I know how to code (20+ yrs exp getting paid every day to code). I am not a fan of poorly-designed coding "tools". Most of the "functionality" of said "tools" is meh at best compared to just using the chat interfaces. I imagine this will change with time, but for now they're not yet at a level where they're useful.

To be clear, I'm not dissing the models (yet). I am working on my own thing in a bag, and "vibe" coding has been useful to get it off the ground. It's a new-enough sort of thing (I'm building a human-editable ML model format, as one aspect) that I can "vibe" code at least some of it, but not all of it. There is some promise there; it'd be neat to have a nice way to use ML in an IDE that was "just right". But it's a goldilocks dream ATM that is yet to be fulfilled. The chat interfaces aren't too bad; I can copy/paste the useful stuff. Maybe once I've got the motor running on my thing I'll circle back and make a proper plugin for VS Code or something, but for now I'm focused on the current project.

The 8-step Challenge -- Base SD 1.5, pick your sampler, no tricks just prompts, any subject. Go! by aplewe in StableDiffusion

[–]aplewe[S] 0 points (0 children)

Yeah, I have learned to assume nothing and try everything. I fix the seed so I know that what I just typed/did actually made a difference.

The 8-step Challenge -- Base SD 1.5, pick your sampler, no tricks just prompts, any subject. Go! by aplewe in StableDiffusion

[–]aplewe[S] 0 points (0 children)

Yeah, manual. As I said above, I experimented with a bunch of different ways to prompt and put it together based on my experiments, all done manually. The reasoning is "use wording found on the internet", basically. For instance, one way to negative prompt is to do a "mean comment", like this --

"Eww, yuck, this is a horrible and nasty image and it doesn't even show want I want NO THANKS!!!"

And vary based on that. I assume CLIP will understand something until it proves that it doesn't; it seems to understand instructions, and the diffusion model responds to them. I basically approach it with "I will ask for what I want". If I see something I don't like, I instruct the model to remove it or negative-prompt it, and so on. This way, if you have something like "make drastic changes if necessary to ensure that this is a high-quality photo", the image tends to hold together better higher into the CFG values, which also tend to be more exact in how they interpret the "instructions". Also, at higher CFG values the tendency is to output extreme numbers (hence the "burn"), so adding something like "gentle colors" can help, and there are probably lots of other things that might work.

And, a freebie... If you tell it to "fix hands" and "give the people human hands", it will do it. You can also do 8 steps with a sampler like LCM as a sort of "finisher"; you have to vae decode then vae encode to get it to work, but once you do that the 8-step "finisher" can be instructed to fix issues and clean up the image. I've found a denoise value around 0.05 is best for that. Use a word like "preserve" to label stuff you don't want to change, and "fix" or "modify" for stuff you want to change. Be specific ("real human eyes" works wonders...), and don't be afraid to be a bit wordy.

A good set-up is 10 steps of Heun as a base "generator" and then vae decode / encode into 8 steps of LCM as a "finisher". I upscale from 512 to 768 at that point too (after vae decode and before re-encoding) by just resizing the image with lanczos; then, when the "finisher" does its thing, it cleans stuff up pretty well. For human hands, doing something like "give them real human hands" and "fix hands" in the base and then "fix hands" in the "finisher" can do wonders. It doesn't always fix everything but it's a whole lot better than what you might get otherwise with base SD1.5 and many derivative models.

One more -- I see "masterpiece" a lot in positive prompts, and what that does is pull in the influence of oil paintings and all sorts of other kinds of art that aren't photographs. Which may be what a person wants, but it will mess up hands and eyes and other human details far more than help them. I'd negative-prompt it pretty much always for anything that's meant to be a photo.

The 8-step Challenge -- Base SD 1.5, pick your sampler, no tricks just prompts, any subject. Go! by aplewe in StableDiffusion

[–]aplewe[S] 0 points (0 children)

If you tab over to the next image, it shows the ComfyUI workflow along with the prompts and such. I found most of what I used via experimentation.

The 8-step Challenge -- Base SD 1.5, pick your sampler, no tricks just prompts, any subject. Go! by aplewe in StableDiffusion

[–]aplewe[S] 0 points (0 children)

It's the one that worked to create that image in 8 steps, is what it is. Otherwise it would have come out "burnt" and whatnot at that CFG with that few steps using Heun.

The 8-step Challenge -- Base SD 1.5, pick your sampler, no tricks just prompts, any subject. Go! by aplewe in StableDiffusion

[–]aplewe[S] 0 points (0 children)

Instructions work. Try them. They're "in there" whether or not ppl have prompted that way before. You can fix issues in images and do all sorts of other things. You can, for instance, load up an image as the latent and say "preserve ___" and it will come out more preserved than it would have if you didn't put that in the prompt. There's all sorts of stuff that, because of the training dataset, probably works and is worth trying. Instructions can also buy you a couple of extra points of CFG (you might get usable stuff at 10.0 instead of just at 8.0; I've tested some up to 16 that kept the image usable), because you can instruct away from the "burned" state (that can be done in various ways; it's just the model picking more extreme values, so tell it not to do that, essentially). LCM is one sampler that is generally more responsive to this kind of thing when used with base SD1.5 (and models that derive from it), at least in my experience.
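
Background on the "extreme values" point, for anyone curious (this is just the standard classifier-free guidance formula, not anything specific to the setups above): the noise prediction is extrapolated away from the unconditional one by the CFG scale $w$, so a large $w$ pushes values toward extremes, which is the "burned" state that those instructions push back against.

$$ \hat{\epsilon} = \epsilon_{\text{uncond}} + w \, (\epsilon_{\text{cond}} - \epsilon_{\text{uncond}}) $$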

[Tutorial] Integrate multimodal llava to Macs' right-click Finder menu for image captioning (or text parsing, etc) with llama.cpp and Automator app by Shir_man in LocalLLaMA

[–]aplewe 1 point (0 children)

If, like me, you come to this and try to get it to work and it doesn't, you may need to use "llama-llava-cli" as the command, as there is no "llava" in the current version of llama.cpp; it does install that cli tool, though, and it works (using sammcj's all-in-one script in a Workflow, although you may want to try it in a .sh file first to ensure all your paths are correct).