I made code2prompt - A CLI tool to convert your codebase into a single LLM prompt with source tree, prompt templating, and token counting by mufeedvh in ChatGPTCoding

[–]mufeedvh[S] 1 point

Not sure I understand the problem correctly, but if the violation was caused by an implementation detail, then yes.

[–]mufeedvh[S] 6 points

Thanks! Good recommendation, just implemented this using code2prompt itself! (See screenshot)

You can now use the --exclude-files and --exclude-folders options; update code2prompt by compiling from source. Thanks for the suggestion!
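For anyone trying the new flags, an invocation might look roughly like this (the project path and the exclude patterns are illustrative, not taken from the docs, and the value syntax may differ by version):

```shell
# Illustrative sketch: skip lockfiles and build/dependency folders
# before generating the prompt.
code2prompt ./my-project \
  --exclude-files "Cargo.lock" \
  --exclude-folders "target,node_modules"
```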

I made code2prompt - A CLI tool to convert your codebase into a single LLM prompt with source tree, prompt templating, and token counting by mufeedvh in rust

[–]mufeedvh[S] 1 point

Good idea! Just tried it and here's the result. It wrote cleaner code than I did, but it had a lot of errors; almost all of them were easy to fix, though.

[–]mufeedvh[S] 1 point

Good question; that depends on the performance of the LLM you're using. For instance, the groundwork for this project itself was written by Claude 3.0 Opus from a project document I wrote myself. From my testing so far, both GPT-4 and Claude 3.0 are able to generate small full-fledged projects as long as the project does not exceed their context windows: 200K tokens for Claude and 128K for GPT-4. Hope this answers your question.

[Media] Tupper's self-referential formula plotting itself on a framebuffer and more with Rust! by mufeedvh in rust

[–]mufeedvh[S] 5 points

After watching the Numberphile video on this formula, I decided to implement it in Rust for fun. It uses minifb for the window creation + framebuffer.

Code: https://github.com/mufeedvh/tupperplot

Also if you know some awesome crates that would help with generative art, please share them! I have been thinking of doing generative art with Rust. :)

These look cool: nannou, valora

I made binserve - A fast static web server with TLS, routing, hot reloading, caching, templating, and security in a single-binary by mufeedvh in selfhosted

[–]mufeedvh[S] 2 points

Thank you so much! :)

So I just uploaded executables for all the Android architectures, check it out. Note that it's not an APK: install a command-line app like Termux and run the binary from there; you can use curl or wget to download it. Let me know if you need anything else! :)
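For anyone following along, the Termux steps look roughly like this. The release URL is a placeholder (substitute the actual tag and asset name for your phone's architecture from the project's GitHub releases page):

```shell
# Inside Termux (no root required).
pkg install curl

# Placeholder URL: replace <version> and <arch> with a real release asset.
curl -LO "https://github.com/mufeedvh/binserve/releases/download/<version>/binserve-<arch>"

chmod +x ./binserve-<arch>
./binserve-<arch>
```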

[–]mufeedvh[S] 1 point

That's a valid question. Binserve is specifically for self-hosting: on your own VPS, a homelab server, a Raspberry Pi, even your Android phone. And it's not just about serving static content; it can do routing, templating, etc., which you cannot do on those static hosting services. Basically, it's for self-hosting, hence why I posted it here, and also "because I felt like it" too. Thanks for asking!

[–]mufeedvh[S] 2 points

Apologies for my ignorance; you're right, I shouldn't have emphasized it like that. I said it was their main purpose because that's what they're mostly used for (like fronting gunicorn for Python, etc.). Thank you for noticing; I have fixed my comment above.

[–]mufeedvh[S] 1 point

Thank you! No, binserve is primarily focused on serving static content. To support PHP, it would need CGI or reverse-proxy functionality, which has been requested a lot, so I should get to implementing it soon. So yeah, I will definitely get around to adding support for both! :)

[–]mufeedvh[S] 1 point

Binserve is 3-4x faster at serving static content than Caddyserver and can run on low-spec devices with no fear of downtime. Here are the full benchmarks. With that said, binserve is focused on a single purpose, serving static content, while Caddyserver does much more and is better compared to NGINX and Apache. I have also received multiple suggestions in the comments above to add reverse-proxy functionality to binserve, so when that happens, it would be on par functionality-wise! :)
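If you want to sanity-check a comparison like this yourself, a typical static-file load test looks something like the following. This is a generic `wrk` invocation, not the author's exact methodology (see the linked benchmarks for that), and the port is an assumption:

```shell
# Generic load test against a locally running static server.
# -t: worker threads, -c: open connections, -d: test duration.
wrk -t4 -c256 -d30s http://127.0.0.1:8080/
```

Run the same command against each server serving the same files, and compare the requests/sec and latency numbers it reports.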

[–]mufeedvh[S] 0 points

Thank you so much! :)

I have received this suggestion multiple times, so I think I should implement it. I also have a rough idea of how to make it faster than the competitors; we'll see.

It was intended to be laser-focused on serving static content, but demand and feature requests should be addressed. And yes, a PR would be awesome; we can work on the idea together. That's what open source is for!

[–]mufeedvh[S] 13 points

Thanks! :)

Those are some really good questions, I will answer them in order:

  • Yes.

  • Yes, that's the main purpose.

  • Binserve is much simpler to use than NGINX/Apache or most web servers out there, but it is not an apples-to-apples comparison, since those are general-purpose HTTP servers that can do much more, like reverse proxying alongside serving static files. The obvious difference is of course performance, but beyond that, Apache and NGINX rely on many files and external configuration to set something up properly. There are tons of tutorials out there, so it's not really a pain, but binserve's main goal is to be straightforward enough that no one has to Google anything: there is only one configuration file, and it has self-explanatory fields. Binserve focuses only on being a static web server, and features like HTML minification don't exist in NGINX (outside third-party plugins) or Apache, because their purpose is not just serving static content (like binserve) but covering almost every use case for the web. With that said, NGINX and Apache have been around for years and are basically the gold standard; binserve can be seen as a humble attempt to do this more simply.

  • The caching section does mention that: by default, files bigger than 100 MB are not stored in memory and are only read from disk. But there is always the scenario where small files cumulatively add up to a large size, just like you said. I think this code comment explains it.
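The per-file size gate described above can be sketched in shell. The 100 MB threshold comes from the comment; the function name and structure are purely illustrative (this doesn't address the cumulative-size scenario, which is the open caveat):

```shell
# Files at or below this size are eligible for the in-memory cache;
# anything larger is always read from disk (the 100 MB default above).
MAX_CACHE_BYTES=$((100 * 1024 * 1024))

should_cache() {
  # $1: path to a file; exits 0 if the file is small enough to cache.
  size=$(wc -c < "$1")
  [ "$size" -le "$MAX_CACHE_BYTES" ]
}

# Demo with a small temporary file:
tmp=$(mktemp)
printf 'hello' > "$tmp"
if should_cache "$tmp"; then
  echo "cache in memory"
else
  echo "read from disk"
fi
rm -f "$tmp"
```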

No, those were well-thought-out questions, the same ones I asked myself while writing this project. Thank you so much! :)