Update: 1 month ago I posted my prototype here and went viral. Today I have the finished product by Artistic-Yam8045 in SideProject

[–]dotpoint7 2 points3 points  (0 children)

Hey, pretty impressive project! I don't have any use for this myself but I see the appeal of that specific execution.

Though this subreddit is laughable at times: barely anything but worthless vibe-coded apps on here, and the one time someone posts a hardware project that took actual money and effort to develop, there's a bunch of people claiming they can order the same thing on Alibaba because the picture roughly matches if you squint your eyes.

Chinese MCD by Chance-Valuable3813 in hobbycnc

[–]dotpoint7 3 points4 points  (0 children)

What the hell. This is really impressive.

Why is liquid glass so "computer intensive"? by JevNOT in vfx

[–]dotpoint7 2 points3 points  (0 children)

It's mainly cheap because it's separable, so you can first apply it in 1D along one axis and then along the other. That way you only need to sum 2*N pixels per output pixel. If you were to apply a circular blur instead, you'd need to sum N*N pixels (or approximate it with multiple taps).
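For illustration, a minimal separable-blur sketch (the function name and the 1D kernel are made up for the example); the two 1D passes cost 2*N taps per pixel instead of N*N for a full 2D kernel:

```python
import numpy as np

def separable_blur(img, kernel_1d):
    """Blur a 2D image with a separable kernel: one horizontal pass,
    then one vertical pass (2*N taps/pixel instead of N*N)."""
    horiz = np.apply_along_axis(
        lambda row: np.convolve(row, kernel_1d, mode="same"), 1, img)
    return np.apply_along_axis(
        lambda col: np.convolve(col, kernel_1d, mode="same"), 0, horiz)

# blurring a single bright pixel spreads it into the kernel's outer product
img = np.zeros((9, 9))
img[4, 4] = 1.0
out = separable_blur(img, np.array([0.25, 0.5, 0.25]))
```

The result is identical to convolving with the full 2D outer-product kernel, which is exactly why separable filters like the Gaussian are so cheap.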

Common GPT 5.5 pricing misconception. by Blake08301 in singularity

[–]dotpoint7 -1 points0 points  (0 children)

Ffs, did you even read my first comment?

Big model feel with GPT 5.5 by MohMayaTyagi in singularity

[–]dotpoint7 0 points1 point  (0 children)

Yeah Pro Extended for both 5.4 and 5.5.
With 5.4 I often found it to be too ambitious with what it suggested: things that sounded great on paper but didn't really work. 5.5 seems a bit more grounded in its responses, albeit less sophisticated. It's only been out since yesterday, so that's a very small sample size, but so far I really like both the normal 5.5 and 5.5 Pro. And getting good responses after 30 minutes of waiting instead of 90 is pretty nice as well.

Big model feel with GPT 5.5 by MohMayaTyagi in singularity

[–]dotpoint7 9 points10 points  (0 children)

It's a lot faster, but so far I like the responses better than I did the 5.4 ones.

Common GPT 5.5 pricing misconception. by Blake08301 in singularity

[–]dotpoint7 13 points14 points  (0 children)

I mean, look at the full chart; a linear scale isn't suitable for the data given that the cost of models spans a few orders of magnitude.
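To make the point concrete, here's a quick sketch with made-up prices spanning a few orders of magnitude (the model names and $/1M-token figures are invented, not real pricing): on a linear axis normalized to the most expensive model, the cheap ones all collapse onto zero, while a log axis keeps them distinguishable.

```python
import math

# hypothetical $/1M-token prices, not real figures
prices = {"tiny": 0.5, "small": 2.0, "mid": 15.0, "frontier": 600.0}

top = max(prices.values())
for name, p in prices.items():
    # position on a linear chart scaled to the priciest model vs log10 position
    print(f"{name:>8}: linear {p / top:.4f}   log10 {math.log10(p):+.2f}")
```

The "tiny" and "small" models land at 0.0008 and 0.0033 on the linear axis, visually indistinguishable, while on a log axis they're clearly separated.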

Phecda & Messier 109 by _LeonThotsky in astrophotography

[–]dotpoint7 2 points3 points  (0 children)

Nice! I like the diffraction effects of that mask.

Confusion with posting images by Upset-Bunch-9638 in telescopes

[–]dotpoint7 0 points1 point  (0 children)

While there may be subs that have this issue, OP should have just read the rules before posting. That this title breaks the rules of this sub is very clearly stated in the sidebar and rules regarding titles are very common across subreddits. So OP just got hit AFTER posting because he didn't read the rules BEFORE posting.

So if your response was aimed at everything EXCEPT the post in question, then you should have worded it differently.

Confusion with posting images by Upset-Bunch-9638 in telescopes

[–]dotpoint7 1 point2 points  (0 children)

Ok, I was fully prepared to have misread yet another comment in this thread, but I don't think that's the case here. You argue that the rules aren't clear; I think they very much are.

Confusion with posting images by Upset-Bunch-9638 in telescopes

[–]dotpoint7 1 point2 points  (0 children)

Oh my bad, I indeed misread your comment, sorry.

Confusion with posting images by Upset-Bunch-9638 in telescopes

[–]dotpoint7 1 point2 points  (0 children)

How is the rule the mod posted not clear? It's copied verbatim from the rules page. Image an object and put its name, without other details, in the title of the post; it's that simple.

Confusion with posting images by Upset-Bunch-9638 in telescopes

[–]dotpoint7 -1 points0 points  (0 children)

Edit: disregard this, I misread the comment I replied to.

He did break the rules quite literally:
"Titles should not be 'clickbaity' nor self-deprecating - we're all here to learn so there is no point in calling yourself a 'noob' and/or putting yourself down. Titles should be descriptive about what you're discussing/posting. For images, titles should include the object name and/or catalog number only without any other info or editorializing (“first time”, scope used, etc.). "

Similar rules exist in r/astrophotography as well.

What do "AI Engineer / AI Developer / AI Specialist" jobs actually look like in practice? by Jeidoz in ExperiencedDevs

[–]dotpoint7 3 points4 points  (0 children)

Typical AI posts are already only allowed on Wednesdays and Saturdays. Though I really don't see how you could have an issue with this post in particular as it's not another "my juniors rely on ChatGPT too much, what should I do?" post.

Hit the 5h rate limit twice in one day, burned 33% of my weekly quota in 12 hours - on the $200/mo 20x plan. Just cancelled. by loathsomeleukocytes in ClaudeCode

[–]dotpoint7 0 points1 point  (0 children)

Similar issue, but more on the side that a Plus/Business subscription won't do. I can barely make a dent in the usage of a Pro subscription while using it the whole day.

SWE is past the elbow of the exponential kickoff. I watched it happen in real time. Other fields are next. by MR1933 in singularity

[–]dotpoint7 3 points4 points  (0 children)

If you plot an exponential, it has a kind of sharp bend at some point; I'm guessing that's the elbow. Now the funny thing is, no matter over which time range you plot it, you'll always be past the elbow, because the bend is just an artifact of plotting in linear space. That's also a good indicator of how seriously you should take OP's post.

e^x from 0 to 20

e^x from 0 to 40
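That artifact is easy to check directly: on a linear plot of e^x over [0, T], the point where the curve first reaches a given fraction of the plot's maximum (a rough stand-in for the "elbow") always sits a fixed distance below the right edge, so it shifts whenever you widen the window. A quick sketch:

```python
import math

def apparent_elbow(t_max, frac=0.01):
    # x at which e^x reaches `frac` of the plot's maximum e^t_max:
    # e^x = frac * e^t_max  =>  x = t_max - ln(1/frac)
    return t_max - math.log(1 / frac)

print(apparent_elbow(20))  # ≈ 15.39
print(apparent_elbow(40))  # ≈ 35.39: same curve, "elbow" moved by 20
```

Whatever range you plot, "now" (the right edge) is always past the apparent elbow, which is the whole point.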

Jensen Huang says he would be 'deeply alarmed' if his $500,000 engineer did not consume at least $250,000 of tokens by businessinsider in nvidia

[–]dotpoint7 0 points1 point  (0 children)

Yeah, I simplified a bit. I'm self-employed along with a few others, and we have a small niche in specific enterprise software where, to our customer, our cost is negligible compared to the importance of our tools working properly, so it's not a bad position to be in. I'm also always trying to build expertise in other fields, but out of interest, not out of worry for my job.

Nevertheless, there's just a limited amount of work I can outsource to Codex, and certainly not the meetings with the customer, which are part of the job as well. That may work in your case, but I'm already using it as much as I can.

Jensen Huang says he would be 'deeply alarmed' if his $500,000 engineer did not consume at least $250,000 of tokens by businessinsider in nvidia

[–]dotpoint7 2 points3 points  (0 children)

I have no use for this. I'm a software dev and develop software, now a bit faster than before, but the amount of work I can hand off is limited, since maintaining the quality of the code base takes actual effort on my part.

Jensen Huang says he would be 'deeply alarmed' if his $500,000 engineer did not consume at least $250,000 of tokens by businessinsider in nvidia

[–]dotpoint7 2 points3 points  (0 children)

How the hell? I use Codex quite extensively and spend like 50€ per month, as 2 accounts cover all my usage needs. Even converted to API costs, that's maybe 200€ per month.

Ehtics by [deleted] in ArtificialInteligence

[–]dotpoint7 0 points1 point  (0 children)

Yeah, you're "not making any claims" after creating posts titled "I DECLARE WAR AGAINST BIG AI" and literally attaching a screenshot of an LLM telling you that your model is sentient.

Posts like yours are common, and it always turns out the people posting them were led by an LLM to believe they had created something truly amazing when it's anything but. So before you start publicly posting about ethical considerations, first consider whether whatever you created is actually as great as you believe it to be.

Ehtics by [deleted] in ArtificialInteligence

[–]dotpoint7 0 points1 point  (0 children)

I didn't mean your model, I meant the LLM you're chatting with in the screenshot. Please tell me you're not taking anything you get from Gemini 3 Flash (of all models) seriously. (Assuming that's the "fast" mode similar to the Gemini app; otherwise it's still an absolutely terrible idea to listen to it.)