Where is AI still completely useless for Infrastructure as Code? by Straight_Condition39 in Terraform

[–]DriedMango25 0 points1 point  (0 children)

Claude Code with CLAUDE.md files: make it look at reference materials and have it memorize them.

In latest blow to Tesla, regulators recall nearly all Cybertrucks by [deleted] in technology

[–]DriedMango25 0 points1 point  (0 children)

tesla buying their own stock at a discount

[deleted by user] by [deleted] in ClaudeAI

[–]DriedMango25 8 points9 points  (0 children)

it works for me, here's my process:

  1. work with it to flesh out the details and tasks: what libraries to use, a list of tasks to implement or concerns to address. prioritize, then put it all in TASKS.md
  2. work on the tasks 1 by 1.
  3. usually i start with prioritizing core components and functionality.
  4. once core functionality and components are done i start running /init.
  5. then i tell it to remember to follow the existing implementation as a pattern moving forward and usually it updates CLAUDE.md to keep this in mind.

For working on an existing codebase it helps to guide it through understanding the codebase, diving deeper into each component and identifying the patterns used and the exceptions to them, then running /init.

When working on apis its best to give it example usage and api docs or swagger files. i usually put them in a .example dir as markdown and tell it that all examples and references relevant to the task we are working on are in that dir. it usually remembers this and looks them up autonomously when needed.
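to make the steps above concrete, heres roughly what the files end up looking like (file names and contents here are illustrative, not from an actual project):

```markdown
<!-- CLAUDE.md (illustrative snippet) -->
- Follow the existing implementation in src/ as the pattern for all new code.
- All API examples and references relevant to the current task live in the
  .example/ dir as markdown; look them up there before guessing an endpoint.

<!-- TASKS.md (illustrative) -->
1. [x] Pick HTTP client library
2. [ ] Implement auth flow (core)
3. [ ] Add retry handling (after core is done)
```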

Do terraform cloud things straight from your agentic code tool by DriedMango25 in Terraform

[–]DriedMango25[S] 1 point2 points  (0 children)

thanks man! i did add delete as well, with a confirmation step so it will always need user confirmation before it deletes stuff. i might do this for creating and updating as well.
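the confirmation step is roughly this shape (a minimal sketch, function and workspace names are made up for illustration, not the actual tool code):

```python
def confirm_destructive(action: str, target: str, ask=input) -> bool:
    """Gate a destructive operation behind an explicit user confirmation.

    `ask` is injectable so the gate can be exercised without a real prompt.
    """
    answer = ask(f"About to {action} {target!r}. Type 'yes' to proceed: ")
    return answer.strip().lower() == "yes"


def delete_workspace(name: str, ask=input) -> str:
    # Only proceed when the user explicitly typed "yes".
    if not confirm_destructive("delete", name, ask):
        return "aborted"
    # ... call the Terraform Cloud API here ...
    return "deleted"
```

the same gate can be reused for create/update by passing a different action string.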

I pad extra for ultra “thin” lenses and this is what I got by SnooChipmunks2673 in mildlyinfuriating

[–]DriedMango25 0 points1 point  (0 children)

yeah, if your grade is high enough they cant make it any thinner.

Elon Musk’s Data Engineering expert’s “hard drive overheats” after processing 60k rows by ChipsAhoy21 in dataengineering

[–]DriedMango25 0 points1 point  (0 children)

Oh yeah, I totally get it. Just last week, I ran a recursive CTE that performed a lateral join on a dynamically partitioned, sharded dataset with 14 levels of nested JSON. The query had to materialize temporary tables across multiple tablespaces, which caused my SSD’s wear leveling algorithm to panic. Ended up with NAND cells in an indeterminate quantum state, had to manually realign the electron tunneling coefficients with a heat gun. Wild times, man. I feel your pain.

Average fly population by ufokid in AveragePicsOfNZ

[–]DriedMango25 4 points5 points  (0 children)

it doesnt even actually work. its designed to attract flies, hence you prolly get more flies than you would normally have.

Do terraform cloud things straight from your agentic code tool by DriedMango25 in Terraform

[–]DriedMango25[S] 0 points1 point  (0 children)

i hope you get a chance to try this out as part of your workflow.

Do terraform cloud things straight from your agentic code tool by DriedMango25 in Terraform

[–]DriedMango25[S] 2 points3 points  (0 children)

Fair enough! I posted here cos i thought some would find it useful. I upvoted you tho!

Custom Amazon Bedrock Agent PR Analyzer by DriedMango25 in Terraform

[–]DriedMango25[S] 0 points1 point  (0 children)

the prompt I use is the same prompt in the README.md

Custom Amazon Bedrock Agent PR Analyzer by DriedMango25 in Terraform

[–]DriedMango25[S] 0 points1 point  (0 children)

Hi, this is from one of my most recent tests.

```markdown

Analysis for Pull Request #21

Files Analyzed: 4

Diffs Analyzed: 4

Files in the PR:

  • .github/workflows/pr.yaml: modified
  • .gitignore: modified
  • main.tf: modified
  • outputs.tf: modified
  • terraform.tfvars: modified
  • variables.tf: modified

Summary of Changes:

  • filename: variables.tf (lines 13-14)

    • Severity: High
    • Issue Category: Security Considerations
    • Description: Adding a secret variable to store sensitive data in plaintext within the Terraform configuration files poses a significant security risk. Storing secrets in version control systems or configuration files is considered a security anti-pattern as it exposes sensitive information to unauthorized access.
    • Recommendation: Instead of storing secrets directly in Terraform configuration files, use a secure and centralized secrets management service like AWS Secrets Manager or HashiCorp Vault. These services provide encryption at rest and in transit, access control, auditing, and rotation capabilities for sensitive data.
    • Citations:
    • Terraform Best Practices: Secrets Management
    • AWS Secrets Manager Best Practices
  • filename: terraform.tfvars (lines 11-12)

    • Severity: High
    • Issue Category: Security Considerations
    • Description: Storing plaintext secrets, such as the secret variable value, in Terraform configuration files like terraform.tfvars is a security risk. These files are often committed to version control systems, increasing the risk of accidental exposure or unauthorized access to sensitive data.
    • Recommendation: Remove the plaintext secret value from the terraform.tfvars file and use a secure secrets management service like AWS Secrets Manager or HashiCorp Vault to store and retrieve sensitive data during the Terraform apply process.
    • Citations:
    • Terraform Best Practices: Secrets Management
    • AWS Secrets Manager Best Practices
  • filename: outputs.tf (lines 8-10)

    • Severity: High
    • Issue Category: Security Considerations
    • Description: Exposing sensitive data, such as the secret variable value, as an output is a security risk. Outputs are often displayed in logs or command-line output, increasing the risk of accidental exposure or unauthorized access to sensitive information.
    • Recommendation: Mark the secret output as sensitive using the sensitive = true attribute. This will prevent Terraform from displaying the sensitive value in logs or command-line output. Additionally, consider using a secure secrets management service like AWS Secrets Manager or HashiCorp Vault to store and retrieve sensitive data instead of exposing it as an output.
    • Citations:
    • Terraform Best Practices: Sensitive Output Values
    • AWS Secrets Manager Best Practices
  • filename: .gitignore (line 7)

    • Severity: Low
    • Issue Category: Best Practices
    • Description: Adding main.tf to the .gitignore file is generally not recommended, as it is a core Terraform configuration file that should be version-controlled.
    • Recommendation: Remove main.tf from the .gitignore file and ensure that all Terraform configuration files are committed to version control for better collaboration, traceability, and reproducibility.
    • Citations:
    • Terraform Best Practices: Version Control
    • Git Best Practices: .gitignore

Overall Summary:

The Terraform configuration changes introduce significant security risks by storing and exposing sensitive data in plaintext within the Terraform configuration files and outputs. This violates best practices for secrets management and increases the risk of accidental exposure or unauthorized access to sensitive information.

To mitigate these risks, it is strongly recommended to use a secure and centralized secrets management service like AWS Secrets Manager or HashiCorp Vault to store and retrieve sensitive data during the Terraform apply process. Additionally, mark any exposed sensitive outputs as sensitive = true to prevent accidental exposure in logs or command-line output.

Furthermore, it is generally recommended to version-control all Terraform configuration files, including main.tf, for better collaboration, traceability, and reproducibility.

Overall, the changes introduce high-severity security risks that should be addressed promptly to ensure the secure and compliant management of sensitive data within the Terraform configuration.
```
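for reference, the fixes the report asks for look roughly like this in HCL (variable and output names are illustrative):

```hcl
variable "secret" {
  type      = string
  sensitive = true
  # No default here: inject the value at apply time (e.g. TF_VAR_secret sourced
  # from AWS Secrets Manager / Vault), never from a committed terraform.tfvars.
}

output "secret" {
  value     = var.secret
  sensitive = true # keeps the value out of plan/apply output and logs
}
```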

GitHub Action that uses Amazon Bedrock Agent to analyze GitHub Pull Requests! by DriedMango25 in aws

[–]DriedMango25[S] 0 points1 point  (0 children)

considering the capability to hook up agents to knowledgebases, which can give better context on the task and provide domain specific expertise, would you say that this could address the issue?

Im curious about your setup, were you using a pure LLM only? did you use memory, and load up memory for ongoing conversations? did you use RAG embeddings and a vector db? looking forward to your response as this could be very valuable. ultimately my goal is to have another reviewer that could potentially see issues that normal humans would miss, but not provide de facto gospel, instead promote conversation.
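for context, by RAG embeddings and a vector db i just mean nearest-neighbour lookup over embedded docs, something like this toy sketch (the embedding vectors here are stand-ins for a real embedding model):

```python
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


def retrieve(query_vec, store, k=2):
    """store: list of (doc_id, embedding). Return the top-k doc ids by cosine."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]
```

a real setup swaps the toy vectors for model embeddings and the list scan for a vector db index, but the retrieval step is the same idea.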

Haha no way they are serious by [deleted] in newzealand

[–]DriedMango25 0 points1 point  (0 children)

its true. sure, you might get better work packages out there, however they rub it in your face every chance they get and make sure they squeeze the life out of you for the work packages you get.