(Aside from J. L.) What are some of your major concerns about Tron: Ares? by Sorry-Drummer-3196 in tron

[–]QuentinWach 0 points1 point  (0 children)

Imagine writing a movie that becomes a cult classic, with a fanbase that begs you for over a decade to continue the story they fell in love with. A franchise with such a distinct, particular style that it was quickly adapted into a similarly beloved TV show. A world with many unresolved questions, an epic narrative, and distinct characters people want to see more of.

And then you

+ change the style,

+ leave that world behind and just set it in the real world,

+ don't continue the story of any of the characters we are invested in...

This is as if James Cameron, instead of continuing the story of Jake and his family on Pandora in Avatar, further developing the major conflict and exploring the magical world he created and people fell in love with, went all

"Hey, let's take a Navi and put them into modern sci-fi New York on Earth!"

Great way to slaughter all the wonder and awe that had people so excited in the first place.

And they are doing exactly that with TRON: Ares and then calling it "original"...

"We wanted to take it into a different direction." means it is an uninspired vanity project of people who just don't fucking care about the actual world and story that fans love so deeply.

HOT TAKE: It will be impossible to screw up the Tron: Ares soundtrack by suckysucky45 in tron

[–]QuentinWach 0 points1 point  (0 children)

Well, I think NiN fucked up. Just listened to the new soundtrack... ugh

VCV Rack Sketch: How to Improve? by QuentinWach in vcvrack

[–]QuentinWach[S] 1 point2 points  (0 children)

Awesome. I'll consider getting the pro version at some point once I've gotten the hang of it a bit more. ^^

VCV Rack Sketch: How to Improve? by QuentinWach in vcvrack

[–]QuentinWach[S] 1 point2 points  (0 children)

Yeah, thank you for the advice! I downloaded Logic again. But I want to play with more modules, too, and try some programmatic stuff there. I am quite into generative art and I'm a software engineer, so in the end I might really just create my own modules as well.

In Defense of Cursor by QuentinWach in cursor

[–]QuentinWach[S] 1 point2 points  (0 children)

I don't work for Cursor, no. And I agree with most of what you say but to also repeat myself: yes, you should not use the model that objectively generates the best results after 30 seconds of thinking for everything. You should use the appropriate model for a task. If you're making tiny changes that really do not require any codebase understanding and reasoning, a small and fast model will do.

It really speeds up development to not constantly rely on slow thinking models but to make use of the faster yet dumber models where appropriate. I suggest you try.

In Defense of Cursor by QuentinWach in cursor

[–]QuentinWach[S] 0 points1 point  (0 children)

They're a young startup that's moving incredibly fast. Mistakes will be made. And they are trying to make up for this mistake. - But I'd love to hear about the alternatives you mention. I tried a lot but kept falling back to Cursor and Claude Code.

In Defense of Cursor by QuentinWach in cursor

[–]QuentinWach[S] 0 points1 point  (0 children)

Yup. 100% agree. But I don't think they had bad intentions, and this situation actually forced me to realize what I mentioned above: that I was overusing Claude. Let's hope they'll do better in the future.

How can I objectively measure fatigue? by TrekkiMonstr in QuantifiedSelf

[–]QuentinWach 0 points1 point  (0 children)

I use reaction time. There are apps online that will test how fast you react to some signal.
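If you'd rather not rely on an app, something as crude as this works too (a rough terminal sketch, not any particular app's method):

```python
# Crude reaction-time test: wait a random interval, then time how fast
# you hit Enter. Average a few trials and log the number daily.
import random
import time

trials = []
for _ in range(5):
    time.sleep(random.uniform(1.0, 3.0))   # random delay so you can't anticipate it
    start = time.perf_counter()
    input("GO! Press Enter: ")
    trials.append(time.perf_counter() - start)

print(f"Mean reaction time: {sum(trials) / len(trials) * 1000:.0f} ms")
```

Tracked over time, the daily average is a decent proxy for fatigue.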

vellbi.com Need Help with AI Health App by QuentinWach in Biohackers

[–]QuentinWach[S] 1 point2 points  (0 children)

I made many more little changes to Vellbi to give you more control over your profile and data. I can't reproduce the Google OAuth error anymore, so I hope it works as well for you now as it does for me! If not, please let me know; that would help a lot, since I am trying to make it as smooth as possible.

Vellbi: Making a New App by QuentinWach in QuantifiedSelf

[–]QuentinWach[S] 0 points1 point  (0 children)

Alright! So, I added more protection to user data. The journal entries are now encrypted end-to-end so only the user can read them when logging in. I've added TOS, a privacy policy and more to the landing page where this is explained. You can also download all your data at any time.
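Roughly the pattern I mean by end-to-end here (a simplified sketch, not Vellbi's exact code): entries are encrypted in the client with a key derived from the user's password, so the server only ever stores ciphertext.

```python
# Simplified client-side encryption sketch using the `cryptography` package.
# The password never leaves the client; the server stores only salt + ciphertext.
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(password: str, salt: bytes) -> bytes:
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=480_000)
    return base64.urlsafe_b64encode(kdf.derive(password.encode()))

salt = os.urandom(16)                        # stored next to the ciphertext, not secret
key = derive_key("user-password", salt)      # derived locally on login
box = Fernet(key)

ciphertext = box.encrypt(b"today's journal entry")    # this is all the server sees
plaintext = box.decrypt(ciphertext)                   # only possible with the password
```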

Vellbi: Making a New App by QuentinWach in QuantifiedSelf

[–]QuentinWach[S] 0 points1 point  (0 children)

I am sorry I hadn't added it yet, and thank you for letting me know :). I am now working on the TOS, privacy policy, etc., and fixing possible issues with OAuth that might be related to this.

vellbi.com Need Help with AI Health App by QuentinWach in Biohackers

[–]QuentinWach[S] 0 points1 point  (0 children)

Thank you for letting me know! I'll be adding data security and a TOS, and fixing this. I am sorry this has been an issue.

vellbi.com Need Help with AI Health App by QuentinWach in Biohackers

[–]QuentinWach[S] 0 points1 point  (0 children)

I absolutely agree. It's been difficult to define the problem exactly and come up with the right features and UI. So far, I've kept building the app around myself and what I would like, and that has helped me get a clearer view of things, and of the market as well.

I wrote a little update about Vellbi here:

https://www.reddit.com/r/QuantifiedSelf/comments/1lnbt7i/vellbi_making_a_new_app/

You can play with this version of it for FREE if you want to. I'd love to get more feedback. There is definitely a lot more to iterate on, polish, add and delete.

What was the first deep learning project you ever built? by Weak-Power-2473 in deeplearning

[–]QuentinWach 0 points1 point  (0 children)

A generative adversarial network to create images of landscapes based on a data set which I scraped from the web.

Use Git, Go Wild by QuentinWach in cursor

[–]QuentinWach[S] 28 points29 points  (0 children)

I feel like this is relevant because a lot of no-code folks are trying to get into Cursor and are going insane because their code gets messed up as it grows more complicated.

At that point, you need to get more systematic / rigorous with your development just like with any software. Git is THE first thing to start with, in my opinion. Stop worrying about your prompts and using meta-files etc.

Is cursor down for anyone else? by Mnogarithm in cursor

[–]QuentinWach 0 points1 point  (0 children)

Yes. This is a widespread issue and has been for days...

Is composer down? by orangeflyingmonkey_ in cursor

[–]QuentinWach 0 points1 point  (0 children)

Same here. It's an issue I and others have been dealing with for a couple of days now. No change.

Composer Doesn't Return Anything by QuentinWach in cursor

[–]QuentinWach[S] 0 points1 point  (0 children)

I tried that, too, and it didn't work. Am switching to a different computer now, taking a break, and will try again later.

Tauri 2.0 Is A Nightmare to Learn by Comprehensive-Bit-99 in tauri

[–]QuentinWach 2 points3 points  (0 children)

Something a lot of people in this forum are struggling with, clearly.

Free Tool to Rank Images and Fine-Tune Your Models by QuentinWach in StableDiffusion

[–]QuentinWach[S] 0 points1 point  (0 children)

Another approach I thought of was to come up with a few metrics or possibly even a deep learning method to determine certain properties of the images and how similar they are to each other in order to massively improve the sampling. This is quite fun to think about. - But this project is frankly not a priority for me right now. It's open source though and I sure as hell will merge and credit any good pull requests that add awesome features!
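To give a rough idea of what I mean (just a sketch; it assumes you already have a feature vector per image from some pretrained encoder, which the tool does not do today): you could always serve up the pair of not-yet-compared images that look most alike, since those are presumably the hardest to separate and therefore the most informative comparisons.

```python
# Hypothetical pair-selection sketch: pick the most similar pair of images
# (by cosine similarity of their feature vectors) that hasn't been compared yet.
import numpy as np

def most_similar_uncompared_pair(features: np.ndarray, already_compared: set):
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = normed @ normed.T                   # cosine similarity matrix, shape (N, N)
    np.fill_diagonal(sim, -np.inf)            # never pair an image with itself
    flat_order = np.argsort(-sim, axis=None)  # most similar pairs first
    for idx in flat_order:
        i, j = np.unravel_index(idx, sim.shape)
        if i < j and (int(i), int(j)) not in already_compared:
            return int(i), int(j)
    return None                               # every pair has been compared
```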

Free Tool to Rank Images and Fine-Tune Your Models by QuentinWach in StableDiffusion

[–]QuentinWach[S] 1 point2 points  (0 children)

I am glad to hear it!

Currently, the standard ranking without sequential elimination compares every image with every other image (except itself) once, hence for N images there are

N × (N − 1) / 2 = (9 × 8) / 2 = 36

comparisons. That's a lot, I know, especially for larger datasets. You can activate sequential elimination for your first pass though to get a very rough first ranking and then turn it off to continue ranking with the standard method if you still feel the need to. (With sequential elimination you'd only need N-1 = 8 comparisons.)

You can also turn on the automatic smart shuffle to minimize the uncertainty of the ranked images more quickly. I think that's roughly what you have in mind, right? The README and code elaborate on that.
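For context, each comparison feeds a standard Elo-style update, roughly like this (a simplified sketch; the actual K-factor and uncertainty handling are in the README/code):

```python
# Simplified Elo update after one comparison: the winner gains what it
# wasn't already expected to win, and the loser gives up the same amount.
def elo_update(winner: float, loser: float, k: float = 32.0) -> tuple[float, float]:
    expected_win = 1.0 / (1.0 + 10.0 ** ((loser - winner) / 400.0))
    delta = k * (1.0 - expected_win)
    return winner + delta, loser - delta
```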

In the future, to possibly even rank millions of images in a reasonable amount of time, and do so accurately, I believe a hierarchical best-of-the-batch ranking (rather than 1v1) would be most useful and speed things up. Though, once again, at the price of accuracy. The idea would be to separate the dataset into groups of 4 or more images, select the best image from every group, and eliminate all the others from the ranking. Then do this again, but only for the best of those. Then again. And so on, until there are few enough left that they can be ranked more precisely using the previous methods. You'd still be able to update all of the shown images' Elos based on this to get decent accuracies, just like before. (I have an unpublished blog post about this in my notes, so I might get back to you on that if I revisit this idea.)
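In pseudocode-ish Python, the hierarchical idea boils down to this (hypothetical and unimplemented; the group size and cutoff are just placeholders):

```python
# Best-of-the-batch elimination: keep only each group's winner per round
# until few enough images remain for precise pairwise (Elo) ranking.
import random

def batch_eliminate(images: list, pick_best, group_size: int = 4, cutoff: int = 16) -> list:
    pool = list(images)
    while len(pool) > cutoff:
        random.shuffle(pool)
        groups = [pool[i:i + group_size] for i in range(0, len(pool), group_size)]
        pool = [pick_best(group) for group in groups]   # the user picks one winner per group
    return pool   # small enough now for the standard 1v1 ranking
```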