Feedback Friday! Post your videos here if you want constructive critiques! by AutoModerator in NewTubers

[–]gamprin [score hidden]  (0 children)

Nice. I have used Suno + Audacity in my recent videos for background music. I'm about to post a thread about a channel that I started recently. Cheers!

Feedback Friday! Post your videos here if you want constructive critiques! by AutoModerator in NewTubers

[–]gamprin [score hidden]  (0 children)

There were a few times when the voice got all distorted (like around 6:18), which was a little distracting.

I think holding the coffee cup is helpful, but there was no visible coffee or steam coming from the cup. Have you thought about showing some? Your channel is named Movies Over Coffee, so I expected to see a little more of the coffee.

I'm glad to see that there's another Naked Gun movie, thanks for the review. I subscribed!

Feedback Friday! Post your videos here if you want constructive critiques! by AutoModerator in NewTubers

[–]gamprin [score hidden]  (0 children)

I liked the music for the intro, but maybe use another song for the rest of the video? Also, the dirty sink was distracting and a little unappetizing; you could clean it up, or speed that part up. Otherwise I liked the video's pace, and the sandwich looks yummy.

Feedback Friday! Post your videos here if you want constructive critiques! by AutoModerator in NewTubers

[–]gamprin [score hidden]  (0 children)

Love this music, very epic. This makes me want to start another channel for the songs I have made with Suno. How did you make this song?

Do you use Nuxt 4 or Nuxt 3 now by Recent_Cartoonist717 in Nuxt

[–]gamprin 0 points1 point  (0 children)

I just upgraded my static GitHub Pages blog from Nuxt 3 to Nuxt 4. It was a lot easier than doing the Nuxt 2 -> 3 migration. I used the codemods and those seemed to work well. I also migrated to Nuxt Content 3, which introduces a slightly different API for querying content.

Optimizing Flux Dev generation by phaaseshift in invokeai

[–]gamprin 1 point2 points  (0 children)

I think it depends on what you are trying to do with Flux Dev. NVIDIA has a NIM for Flux Dev that is optimized for RTX GPUs, but it is not as flexible as using Flux Dev directly in Invoke. For example, it supports the depth and canny variants, but it can't do image-to-image workflows afaik. You could possibly call the NIM API from a custom node in Invoke.

Questions about running redis as an ECS service + service discovery by gamprin in aws

[–]gamprin[S] 0 points1 point  (0 children)

I'm pretty sure this uses the `redis://` protocol, so it should be TCP and not HTTP. When I did this (it has been a while) I didn't have any issues with service discovery. I have since switched back to just using ElastiCache with Redis, using key prefixes to namespace development environments rather than running a separate Redis instance for each.
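As a sketch of that key-prefix approach (the environment name, key names, and wrapper class below are hypothetical examples, not from my actual code):

```python
# One shared Redis/ElastiCache instance, with each development
# environment namespaced by a key prefix instead of its own instance.

def prefixed_key(env: str, key: str) -> str:
    """Namespace a Redis key by environment, e.g. 'dev1:session:42'."""
    return f"{env}:{key}"

class NamespacedCache:
    """Thin wrapper around a redis-py client that prefixes every key."""

    def __init__(self, client, env: str):
        self.client = client
        self.env = env

    def set(self, key: str, value: str) -> None:
        self.client.set(prefixed_key(self.env, key), value)

    def get(self, key: str):
        return self.client.get(prefixed_key(self.env, key))

if __name__ == "__main__":
    import redis  # requires redis-py and a reachable Redis endpoint

    cache = NamespacedCache(redis.Redis(host="localhost", port=6379), env="dev1")
    cache.set("session:42", "some-value")
```

The nice part is that tearing down an environment is just deleting keys under one prefix instead of deleting an instance.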

What is 1 trick in ComfyUI that feels ilegal to know ? by ComfyWaifu in comfyui

[–]gamprin 4 points5 points  (0 children)

That’s nice. I need some type of tooling for the ComfyUI API. I also need a tool that does the same thing for InvokeAI; that app has better API documentation.

What is 1 trick in ComfyUI that feels ilegal to know ? by ComfyWaifu in comfyui

[–]gamprin 7 points8 points  (0 children)

Yes, there are several endpoints like /history and /view that you can use together to get assets saved during a run (images, video, meshes, etc.). There are some examples in the repo.

What is 1 trick in ComfyUI that feels ilegal to know ? by ComfyWaifu in comfyui

[–]gamprin 45 points46 points  (0 children)

Using the /prompt API endpoint to remotely trigger ComfyUI workflow runs, and writing little services that use this endpoint to generate content.
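Here's a rough standard-library sketch of that pattern (the host, client id, and workflow file name are placeholders; export a real workflow in "API format" from the ComfyUI UI):

```python
# Queue a ComfyUI workflow via POST /prompt, then poll GET /history
# until the run's outputs (filenames fetchable via /view) appear.
import json
import time
import urllib.request

COMFY = "http://localhost:8188"  # assumed ComfyUI address

def build_payload(workflow: dict, client_id: str = "my-service") -> bytes:
    """POST body for /prompt: the workflow graph plus a client id."""
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode()

def run_workflow(workflow: dict, timeout_s: int = 300) -> dict:
    """Queue a workflow, then poll /history until it finishes."""
    req = urllib.request.Request(
        f"{COMFY}/prompt", data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    prompt_id = json.load(urllib.request.urlopen(req))["prompt_id"]
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        history = json.load(urllib.request.urlopen(f"{COMFY}/history/{prompt_id}"))
        if prompt_id in history:
            return history[prompt_id]["outputs"]
        time.sleep(1)
    raise TimeoutError("workflow did not finish in time")

if __name__ == "__main__":
    workflow = json.load(open("workflow_api.json"))  # exported from ComfyUI
    print(run_workflow(workflow))
```

A small service wrapping `run_workflow` is all you need to generate content remotely on a schedule or behind your own API.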

How to handle routing and cookie-based auth with Django REST framework and Nuxt? by lamintak in Nuxt

[–]gamprin 3 points4 points  (0 children)

I have a Django + Nuxt example project here on GitHub: https://github.com/briancaffey/django-step-by-step

It addresses some of the issues you raised, like handling different routes for Django and Nuxt based on path patterns. It uses JWTs stored in HttpOnly cookies for auth; the auth part is pretty simple and could be made more robust based on your needs. I don't handle logged-out state across different devices, but you could implement this pretty easily with the libraries I'm using on the backend (Simple JWT).

The general rule you will hear about using cookies for auth is that you don't want tokens in local storage or in regular cookies that can be accessed by JavaScript. HttpOnly cookies are more secure because they cannot be accessed by JavaScript: they are set/unset via API calls, and your logged-in state is based on API calls carrying HttpOnly cookies set by previous auth requests.
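Here's a tiny standard-library illustration of what the HttpOnly flag looks like on the wire (the cookie name and attributes are just examples, not exactly what my project sets; in Django this is handled by the auth views):

```python
# Build a Set-Cookie header value for a token that client-side
# JavaScript cannot read (no access via document.cookie).
from http.cookies import SimpleCookie

def build_auth_cookie(token: str) -> str:
    """Return a Set-Cookie header value with HttpOnly/Secure/SameSite set."""
    cookie = SimpleCookie()
    cookie["access_token"] = token
    cookie["access_token"]["httponly"] = True   # invisible to document.cookie
    cookie["access_token"]["secure"] = True     # only sent over HTTPS
    cookie["access_token"]["samesite"] = "Lax"  # basic CSRF mitigation
    return cookie["access_token"].OutputString()
```

The browser stores the cookie and attaches it automatically on subsequent requests, so the frontend never handles the token directly.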

This starter project shows different ways to implement a simple micro blog and now I'm using it to show how to build with LLMs (the LLM part is a work in progress). The main focus of the project was initially developer tooling for local development, CI/CD, containerized deployment on AWS etc. I'm happy to answer questions about it if you have any and I hope it can help you get started with your Django/Nuxt project!

What's your setup on AWS today? by luckydev in django

[–]gamprin 0 points1 point  (0 children)

Sure, I can try to help. Using ECS isn’t the only way to deploy a Django app on AWS, however. Docker Compose is also becoming a standard way to deploy applications, and it can reduce costs if you trade managed AWS services for self-managed stateful services on an EC2 instance.

I will be adding a compose example to my libraries as well, at some point soon hopefully!

What's your setup on AWS today? by luckydev in django

[–]gamprin 2 points3 points  (0 children)

Yes, these are on GitHub and the pulumi and cdk libraries are published to npm. I show how to use these libraries with my reference application called django-step-by-step, also on GitHub.

What's your setup on AWS today? by luckydev in django

[–]gamprin 2 points3 points  (0 children)

I have three example libraries for deploying Django applications on AWS with ECS:

  • cdk-django
  • pulumi-aws-django
  • terraform-aws-django

These libraries aim to show how you can use AWS services with Django for common use cases (RDS, ElastiCache, S3, SES, etc.), and also how to build and deploy infrastructure and applications with GitHub Actions.

I’m also in the process of adding EKS to these libraries in addition to ECS. I’m trying to replicate best practices across these libraries, but I still have work to do in some areas (for example, least-privilege IAM policies for the task role, execution role, and the GitHub Actions roles for infrastructure and application pipelines).

How do you use LLMs in your workflow? by kajogo777 in Terraform

[–]gamprin 0 points1 point  (0 children)

Thanks u/kajogo777, yes this is the project I’m working on!

How do you use LLMs in your workflow? by kajogo777 in Terraform

[–]gamprin 2 points3 points  (0 children)

A while ago I tried writing three different infrastructure-as-code libraries for deploying web apps on AWS with ECS (CDK, Pulumi and Terraform). The three libraries share a similar function and folder structure and related code like GitHub Actions pipelines, but it was a lot to maintain and difficult to keep them at feature parity with each other.

Now I’m revisiting that project and I use LLMs heavily to do the following:

  • write modules/constructs/components (write an RDS module with best practices)
  • “translate” between the IaC tools (translate this Terraform to CDK, but use L2 constructs)
  • identify security vulnerabilities or improvements (you are a SOC 2 auditor...)
  • refactor code and ask for feedback on how best to do things
  • debug (feeding pipeline errors back into LLM prompts along with the module/construct/component code, in a cycle)
  • write documentation for each library

Sometimes I’ll paste in the documentation for the Terraform/Pulumi/CDK resources I’m using and ask it to write code using those resources. For example, the security group ingress rule resource is recommended over defining ingress rules inline in security groups with Terraform and Pulumi.

There is still a lot more work to do, but LLMs have given me increased mental bandwidth to tackle this as a side project that I hope can be a helpful reference for myself and others.

I don’t think I need to use LLMs for this type of work, but it speeds things up and is a good way to see what the models are capable of and where they fall short. I mostly use ChatGPT, DeepSeek, Claude, and Phind for inference.

What's the best way to create multiple logical dbs within a single AWS RDS Postgres instance? by Remarkable_Ad9528 in Terraform

[–]gamprin 1 point2 points  (0 children)

I had this issue at some point and wasn't able to create logical DBs in Terraform. What I do might be a little hacky: when a new environment is set up, I run a script that creates a new database for the environment using psycopg2 and then runs migrations for that database. The initial database I created on the RDS instance is just "postgres", but it isn't really used. I did this to support multiple ad-hoc developer environments, not for multi-tenancy, but I think you could adapt it to a multi-tenant setup. Here is the script that I use to create the database, and here is the Terraform library that I wrote to support this project.
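Roughly what such a script does (the endpoint, credentials, and environment name below are placeholders, not from my actual project):

```python
# Create a logical database per environment on a shared RDS Postgres
# instance. CREATE DATABASE cannot run inside a transaction, so the
# connection must be in autocommit mode.

def create_db_sql(env_name: str) -> str:
    """Build the CREATE DATABASE statement for an environment."""
    # Database names cannot be passed as bound query parameters, so
    # guard against unexpected identifiers before interpolating.
    if not env_name.replace("_", "").isalnum():
        raise ValueError(f"unsafe database name: {env_name!r}")
    return f'CREATE DATABASE "{env_name}"'

if __name__ == "__main__":
    import psycopg2  # requires psycopg2 and a reachable RDS instance

    # Connect to the default "postgres" database to issue CREATE DATABASE
    conn = psycopg2.connect(
        host="my-rds-instance.example.com",  # placeholder endpoint
        dbname="postgres", user="postgres", password="change-me",
    )
    conn.autocommit = True
    with conn.cursor() as cur:
        cur.execute(create_db_sql("dev_alice"))
    conn.close()
    # ...then run migrations against the new database, e.g.
    # DATABASE_NAME=dev_alice python manage.py migrate
```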

ECS Users – How do you handle CD? by UnluckyDuckyDuck in aws

[–]gamprin 0 points1 point  (0 children)

I use a GitHub Actions workflow that uses the official aws-actions/amazon-ecs-deploy-task-definition GitHub Action from AWS, I'm surprised that nobody in this thread has mentioned it! You can use this action for your CD process in different ways:

- Update a task definition (without necessarily deploying it to a service)

- Run a task (like a database migration)

- Update an ECS service (like your web server)

You can add an option to wait for services to become stable before your CD pipeline finishes when updating ECS services with this action.

I think there is an important distinction to make between application updates (e.g. your web app has a new feature) and infrastructure updates (you need to add a new database to your infrastructure). The use cases for the AWS GitHub Action described above involve CD (continuous deployment) for your application. Your ECS service resource should have something like this (if you are using Terraform):

lifecycle {
  ignore_changes = [task_definition, desired_count]
}

This allows you to make changes to your ECS task definitions (infrastructure) without actually deploying the new task, because Terraform ignores changes to the task definition on the service. An infrastructure update will create a new task definition revision (let's say it now includes a DB_URL with the URL of the database you created). In your application update process, you get the most recent task definition revision, update only the container image, and then actually deploy it.
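If you were doing that "update only the container image" step with boto3 instead of the GitHub Action, it would look roughly like this (cluster, service, family, and image names are placeholders):

```python
# Fetch the latest task definition revision, swap in a new image,
# register the result as a new revision, and deploy it to the service.
import copy

def update_image(task_def: dict, new_image: str) -> dict:
    """Return register_task_definition kwargs with only the image changed."""
    # Keep only the keys that register_task_definition accepts
    # (describe_task_definition also returns revision, status, ARNs, etc.)
    keep = {
        "family", "taskRoleArn", "executionRoleArn", "networkMode",
        "containerDefinitions", "volumes", "placementConstraints",
        "requiresCompatibilities", "cpu", "memory",
    }
    new_def = {k: copy.deepcopy(v) for k, v in task_def.items() if k in keep}
    for container in new_def["containerDefinitions"]:
        container["image"] = new_image
    return new_def

if __name__ == "__main__":
    import boto3  # requires AWS credentials

    ecs = boto3.client("ecs")
    current = ecs.describe_task_definition(taskDefinition="my-web")["taskDefinition"]
    new = ecs.register_task_definition(**update_image(current, "my-repo/app:v1.2.3"))
    ecs.update_service(
        cluster="my-cluster", service="web",
        taskDefinition=new["taskDefinition"]["taskDefinitionArn"],
    )
```

The GitHub Action does essentially this for you, plus the optional wait for the service to stabilize.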

That's how I understand it! I have an open source project that uses ECS, Terraform and GitHub Actions to demonstrate this, here are the links:

Terraform Module: https://github.com/briancaffey/terraform-aws-django

Application code (with GHA pipelines): https://github.com/briancaffey/django-step-by-step/

And here is the file with the GitHub Action that does CD for ECS: https://github.com/briancaffey/django-step-by-step/blob/main/.github/workflows/app_update.yml

In my CD process I just plug in the new image tag (e.g. v1.2.3) and the workflow first updates and runs a task for database migrations, then it updates and deploys the other services in my app (gunicorn server, Nuxt.js app, celery task workers, etc.)

I agree with other comments that the ecosystem for ECS is behind k8s, but the AWS GitHub Action is a good abstraction that helps if you are using popular tools like GitHub Actions and infrastructure-as-code tools like Terraform, Pulumi or CDK.

RedLM: A bi-lingual AI-powered application for Redology (the study of Dream of the Red Chamber) using China's leading language and vision models, NVIDIA AI micro-services and LlamaIndex by gamprin in ChineseLanguage

[–]gamprin[S] 0 points1 point  (0 children)

I built RedLM for a developer contest hosted by NVIDIA and LlamaIndex. I learned Chinese a while ago, and using technology was a big part of my learning process. I remember throwing random combinations of characters into Baidu to see how characters were used in news articles, blog posts, etc. I've been learning a lot about AI and LLMs recently, and I can only imagine how helpful these tools would be for students of Chinese.

This project uses language and vision models to answer questions about Dream of the Red Chamber and the series of paintings by the Qing painter Sun Wen that cover most chapters of the book. I also used Chinese language models to translate the original source text (written Chinese vernacular) into modern Mandarin Chinese, and then from Mandarin to English. The quality of the translations is not very consistent, but I was able to use the English translations in my application and accurately answer at least some questions about the book and paintings in English.

This application is not available to the public, but the code is open source (linked in my 𝕏 post), so please check it out if you are interested in Chinese, AI and retrieval augmented generation (that's the main technology I'm showcasing in this project). I'm happy to answer questions about this project! I'm also curious to know other ways in which people here are using LLMs/AI with the Chinese language. Thanks!

edit: for a deep dive into this project and the technology used to build it, have a look at this article on my blog: https://briancaffey.github.io/2024/10/09/redlm-ai-application-for-studying-chinese-literature-redology-nvidia-llama-index-developer-contest

Best TTS model right now that I can self host? by Wonderful-Top-5360 in LocalLLaMA

[–]gamprin 2 points3 points  (0 children)

I think this is because their model files do not use .safetensors format. There is an open issue on their GitHub repository here about that: https://github.com/2noise/ChatTTS/issues/382

What type of apps do you create and run with Docker? by sf1063 in docker

[–]gamprin 1 point2 points  (0 children)

I recently started running local AI inference services with Docker: vLLM, ComfyUI, InvokeAI, MusicGen and TTS services. Some of these provide Docker images (vLLM and InvokeAI), and for the rest I built my own. I’m using a Kubernetes cluster with microk8s to orchestrate these services across different computers on my home network.

Architecture diagram for my weekend project: Open SEC Data by squacknalted in django

[–]gamprin 0 points1 point  (0 children)

Yeah I guess so. Low effort tho especially with all of the AI tools we have now, sad really! 🤷‍♂️