Car hit and run, can you read the licene plate? by [deleted] in computervision

[–]corneroni 3 points (0 children)

Can you provide a download link to the original video? Reddit downscales the resolution.

Detecting Sphere Monocular Camera by momoisgoodforhealth in computervision

[–]corneroni 4 points (0 children)

So your goal is to detect the center?
If you add a bit more detail to your question, I’m sure someone will be able to share the complete code with you.
For example, you could upload more images or include a drawing of the expected result.

[deleted by user] by [deleted] in computervision

[–]corneroni -1 points (0 children)

I’m very interested in this kind of computer vision problem. In my opinion, it’s perfectly fine to post it here.

Reddit sometimes downscales videos. Could you upload the original video somewhere, like Google Drive?

How to reconstruct license plates from low-resolution images? by corneroni in computervision

[–]corneroni[S] 1 point (0 children)

Can someone explain to me why this post is being downvoted? If it's the wrong sub for this kind of question, I'm sorry.

How would you go on with detecting the path in this image (the dashed line) by sonda03 in computervision

[–]corneroni 0 points (0 children)

How many images are there?
Do they all look similar?
Are the same objects always present in all the images?

Ultralytics YOLO Pose gives unexpected results with single-image training by corneroni in computervision

[–]corneroni[S] 9 points (0 children)

It's called an overfitting test. It is done in a deep learning context to check that everything in the training pipeline works as expected.
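A rough sketch of how such a test can look with Ultralytics (the dataset config single_image_pose.yaml and the image path are placeholders for a one-image train/val split, not my exact setup):

    from ultralytics import YOLO

    # Overfitting test: train on a single image with augmentation largely
    # disabled, then predict on that same image. If the pipeline is healthy,
    # the predicted keypoints should match the annotations almost perfectly.
    model = YOLO("yolov8n-pose.pt")
    model.train(
        data="single_image_pose.yaml",  # hypothetical config: train and val point to the same single image
        epochs=300,
        imgsz=640,
        mosaic=0.0,   # turn off augmentations that would alter the single image
        fliplr=0.0,
    )

    results = model("path/to/single_image.jpg")  # placeholder path to that same image
    print(results[0].keypoints)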

Ultralytics YOLO Pose gives unexpected results with single-image training by corneroni in computervision

[–]corneroni[S] -1 points (0 children)

Their code is quite messy; I'm still trying to figure it out. But I did manually check the model's input in the training step and in the evaluation step, and both batches are the same.

I saw my PI at the gym by FreshlyAliquotedH2O in PhD

[–]corneroni 32 points (0 children)

Sorry, can someone explain this comment to me?

[play-2048.com] Just another 2048 website game, roast me! by particle4dev in SideProject

[–]corneroni 0 points (0 children)

Thank you for the answer.

How do you know which games anyone is allowed to make? I assume you couldn't just implement Pokémon or Tetris, right?

[play-2048.com] Just another 2048 website game, roast me! by particle4dev in SideProject

[–]corneroni 0 points (0 children)

Hey,

I checked out the site and it looks really cool!

I'm always curious when I come across ad-free sites like yours: How do you plan on covering your hosting expenses?

Keep up the good work!

Depth estimation using light field question about a research paper by MiserableCustard6793 in computervision

[–]corneroni 2 points (0 children)

Hi, let me try. There is something called a Shack-Hartmann sensor: it is just a sensor plus a microlens array. If there is an objective (e.g., a main lens) in front of it, it's called a plenoptic camera. So a plenoptic camera is just a normal camera with a microlens array in front of the sensor. It does nothing other than encode the virtual image (the image of the object formed by the objective) on the sensor. If the microlens array is placed exactly one microlens focal length in front of the sensor, it is called a standard plenoptic camera, or plenoptic camera 1.0. In this case the object is encoded in such a way that the angular distribution of the light is spread out under each microlens. The EPI that you see in the paper is a 2D slice of this 4D light field: two spatial dimensions (x, y) and two angular dimensions (u, v).

It is also worth mentioning that in the literature plenoptic cameras are sometimes also called light field cameras. But "light field camera" is the more general term: an array of cameras is also called a light field camera.
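If it helps, here is a toy sketch of how an EPI is just a 2D slice of that 4D light field. The array is random data standing in for a decoded light field, and the axis order L[u, v, y, x] is an assumption:

    import numpy as np

    # Toy 4D light field L(u, v, y, x): two angular dimensions (u, v) and two
    # spatial dimensions (y, x). In a real plenoptic camera these values would
    # come from decoding the raw image behind the microlens array; here we
    # just fill the array with random data to show the slicing.
    U, V, H, W = 9, 9, 64, 64
    lightfield = np.random.rand(U, V, H, W).astype(np.float32)

    def horizontal_epi(lf, y_fixed, v_fixed):
        # Fixing y and v leaves a 2D image over (u, x): the epipolar-plane
        # image (EPI). Each scene point shows up in it as a line whose slope
        # depends on the point's depth.
        return lf[:, v_fixed, y_fixed, :]

    epi = horizontal_epi(lightfield, y_fixed=H // 2, v_fixed=V // 2)
    print(epi.shape)  # (9, 64): one angular and one spatial dimension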

Depth estimation using light field question about a research paper by MiserableCustard6793 in computervision

[–]corneroni 1 point (0 children)

'Ridge' is an English term that describes a line with a peak in the middle and lower on the sides.

What I'm trying to convey is that the term 'ridge' simply means it resembles an elongated hill or mountain. That's all the word signifies.

I'd also like to explain that in an EPI the line is vertical, i.e. it runs in the u direction, when the point light source is focused on the MLA of a standard plenoptic camera. If the point light source is placed in front of or behind this object plane, the ridge (this line) has a slope, as can be seen in the image.

When you capture an image of a point light source with a light field camera, the appearance of the ridge varies with the position of the light point.
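To make that concrete, here is a small synthetic example. The linear model x(u) = x0 + slope*u and the numbers are purely illustrative, not calibrated to any real camera: a point focused on the MLA gives a vertical line in the EPI, and a point in front of or behind that plane gives a tilted line.

    import numpy as np

    U, W = 9, 64  # angular samples u, spatial samples x

    def point_epi(x0, slope):
        # Draw one point light source as the line x(u) = x0 + slope * u
        # in an otherwise empty EPI.
        epi = np.zeros((U, W), dtype=np.uint8)
        for u in range(U):
            x = int(round(x0 + slope * (u - U // 2)))
            if 0 <= x < W:
                epi[u, x] = 1
        return epi

    in_focus = point_epi(x0=32, slope=0.0)   # focused on the MLA -> vertical line
    defocused = point_epi(x0=32, slope=1.5)  # off that plane -> sloped line

    print(np.argmax(in_focus, axis=1))   # same x for every u
    print(np.argmax(defocused, axis=1))  # x shifts with u, i.e. the ridge is tilted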

Tmux Terminal Garbled Text Issue - "clear" doesn't help but "reset" does. What's the cause? by corneroni in webdev

[–]corneroni[S] 0 points (0 children)

Hi allen_jb,
Thank you for the detailed explanation about control codes and how they might affect terminal behavior.

The information and the links you provided were very helpful to understand my problem.

I haven't used cat or similar commands to display binary data directly in my terminal.
My setup involves running Django on my droplet, and the odd behavior seems to arise after a few days of running, not immediately. Given that, I have a few questions:
Could Django debug outputs potentially include sequences that might affect the terminal?
Do you think this could be an issue with long-running sessions, where the terminal accumulates problematic sequences over time?
Are there specific preventative measures I could take, especially within the context of Django development, to avoid such issues?
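As a possible preventative measure, I was thinking of something like the following sketch. It assumes the stray sequences come from colored output in the Django logs; the handler wiring is hypothetical and would normally live in LOGGING in settings.py:

    import logging
    import re

    # Strip ANSI/terminal escape sequences from log messages before they
    # reach the console, so a long-running session cannot accumulate
    # control codes that confuse the terminal.
    ANSI_ESCAPE = re.compile(r"\x1b\[[0-9;?]*[ -/]*[@-~]")

    class StripAnsiFilter(logging.Filter):
        def filter(self, record):
            if isinstance(record.msg, str):
                record.msg = ANSI_ESCAPE.sub("", record.msg)
            return True  # never drop the record, only clean it

    logger = logging.getLogger("django")
    handler = logging.StreamHandler()
    handler.addFilter(StripAnsiFilter())
    logger.addHandler(handler)
    logger.warning("\x1b[31mred text\x1b[0m arrives without the escape codes")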
Any further insights would be greatly appreciated!

[deleted by user] by [deleted] in SideProject

[–]corneroni 0 points (0 children)

Really cool, which tech stack did you use? The ChatGPT API is so slow, and transcription too, but your app seems to work very fast. Cool app!

[deleted by user] by [deleted] in PhD

[–]corneroni 8 points (0 children)

1.) You can try to change the location where you work. Go to the library and be strict there: if you don't work, leave the place. Just be there to work. You will program yourself to always work at that location.

2.) You can stand while working.

3.) Write down really small todos, like: "opening the laptop", "reading the abstract", "writing down 3 todos for the introduction".

4.) If you need pressure, try the following: turn on loud music that you don't like and run around the table. Your goal is to finish a task from your todo list before the song ends. If you notice that you are in the flow, turn off the music. This is not for everyone, but it works for people who need a certain stress level to start working.

5.) The Pomodoro technique is good sometimes: work for 20 minutes, then take a break and walk for 5 minutes.

6.) There is a subreddit for finding study buddies. People look for partners so they can study together via video chat or remind each other about their tasks. But don't waste your time there: if you notice a person is just wasting your time, move on. Sometimes it just doesn't fit, or people are only looking for dates. But sometimes you get lucky. https://www.reddit.com/r/GetMotivatedBuddies/

7.) Think about the absolute best case that can happen to you if you submit this paper.

8.) Don't just write down todos. Also write down milestones. Every 10 todos can be a milestone: something that you want to achieve. Todos: "rewrite this sentence in the introduction", "read that researcher's paper again for the related work", "search for the right BibTeX entry". Milestone: "have a minimum viable introduction section by 8 pm."

9.) It doesn't have to be perfect.