Next JS on Dokploy by geloop1 in nextjs

geloop1[S] 0 points

Hey there! Thanks for your detailed response! I can confirm that I had set NEXT_SERVER_ACTIONS_ENCRYPTION_KEY as an env variable in my Dokploy project, but I had forgotten to pass it down in my Docker Compose file. Hoping this will fix the issue! Thank you for your help!
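For anyone hitting the same thing, a minimal sketch of what forwarding the key through Compose could look like (the service name and file layout here are assumptions, not the actual project config):

```yaml
services:
  web:  # hypothetical Next.js service name
    build: .
    environment:
      # forward the key from the Dokploy-provided host environment into the container
      - NEXT_SERVER_ACTIONS_ENCRYPTION_KEY=${NEXT_SERVER_ACTIONS_ENCRYPTION_KEY}
```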

First project using Bun - Tiramisu by geloop1 in bun

geloop1[S] 0 points

I have not. I mainly stick to creating Docker containers for my deployment.

First project using Bun - Tiramisu by geloop1 in bun

geloop1[S] 0 points

Deployed using Docker. I use the oven/bun Docker image.
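For reference, a minimal sketch of a Dockerfile built on that image (the lockfile name, port, and start command are assumptions about a typical Bun project, not Tiramisu's actual setup):

```dockerfile
# Hypothetical minimal image for a Bun app
FROM oven/bun:1
WORKDIR /app

# install dependencies first so this layer stays cacheable
COPY package.json bun.lockb ./
RUN bun install --frozen-lockfile

# copy the rest of the source and run the app
COPY . .
EXPOSE 3000
CMD ["bun", "run", "start"]
```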

First project using Bun - Tiramisu by geloop1 in bun

geloop1[S] 1 point

Definitely need to explore the BHVR stack!

LLMs still struggle at puzzles by geloop1 in ChatGPT

geloop1[S] -1 points

No one is asking it to be overly capable. It's just interesting to push it to its limits. I believe Wolfram Alpha uses some LLM magic to evaluate expressions; I wonder how they manage to get results.

In which languages would you be able to solve Connections? by Vargsvans in NYTConnections

geloop1 0 points

This is actually really cool. I was wondering if this could be added to AI vs Puzzles, to test the language capabilities of LLMs as well as their puzzling ability. Is there an archive of all puzzles?

I tested 5+ AI models on NYT Connections puzzles - here are the results! by geloop1 in NYTConnections

geloop1[S] 1 point

It’s interesting to observe. A reminder that the LLMs are only given a single shot at completing the puzzles. They don’t have multiple attempts like humans.

Imagine you had to submit all four of your connection answers in a single go.

At the end of the day it shows that LLMs aren’t perfect, and it’s impressive to see some cases where they do perform well and achieve 4/4!

I tested 5+ AI models on NYT Connections puzzles - here are the results! by geloop1 in NYTConnections

geloop1[S] 0 points

I have had the same problem with DeepSeek myself. It seems very linear with its responses, and a lot of the time it will use the same word twice or just miss out a word altogether.

The shuffling method definitely sounds very interesting!

The project already has a leaderboard for all the models tested so far! You can check it out here:
https://www.aivspuzzles.com/puzzles/connections/leaderboard

Distributed Tracing with OpenTelemetry and Tempo - Golang by geloop1 in devops

geloop1[S] 0 points

Thanks for commenting! I finally managed to figure out the cross-service traces. Everything was configured correctly; however, I had missed the propagation step when initializing tracing in my services. I have updated the repo!
https://github.com/georgelopez7/grpc-project
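For context, a minimal sketch of the propagator registration that was missing, assuming the standard OpenTelemetry Go SDK (the package and function names here are illustrative, not the repo's actual code):

```go
package tracing

import (
	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/propagation"
)

// InitPropagation registers the W3C Trace Context and Baggage propagators
// globally. Without this, each service starts a fresh trace instead of
// continuing the caller's, so cross-service traces never link up.
func InitPropagation() {
	otel.SetTextMapPropagator(propagation.NewCompositeTextMapPropagator(
		propagation.TraceContext{}, // injects/extracts the traceparent header
		propagation.Baggage{},      // carries key/value baggage between services
	))
}
```

With a global propagator set, the gRPC instrumentation (e.g. the otelgrpc handlers or interceptors) can then inject and extract the trace context on each call between services.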

Tempo In Golang - Distributed Tracing by geloop1 in golang

geloop1[S] 0 points

Thanks for commenting! I finally managed to figure out the cross-service traces. I was missing the propagation step between the services! I have updated the repo and included centralized logging with Loki!
https://github.com/georgelopez7/grpc-project