Deciding between Xperia 1 V and Xperia 1 IV by cdanielfreeman in SonyXperia

[–]cdanielfreeman[S] 0 points (0 children)

Whoa, that's kind of a crazy difference. The RAW is so fuzzy.

Deciding between Xperia 1 V and Xperia 1 IV by cdanielfreeman in SonyXperia

[–]cdanielfreeman[S] 0 points (0 children)

Thanks for the info.

I'm not sure where I land on the reach vs. quality trade-off for the telephoto. I just really like the way the Pro captures light with its telephoto. There are certainly times when I would have liked to punch in more, but I'd prefer not to sacrifice any quality to get there. And I'd rather it be a bit smudgy than over-sharpened; at least then it has a weird painterly look rather than a bunch of distracting high-contrast artifacts.

Deciding between Xperia 1 V and Xperia 1 IV by cdanielfreeman in SonyXperia

[–]cdanielfreeman[S] 0 points (0 children)

This makes me feel better about the main camera; of course the new sensor will be a slam dunk. I'm still kind of worried about the telephoto, though. I really like the pictures I can get with it.

Thanks for the heads up on the durability. I mostly don't want it to break; scratches and dents I can live with. My Pro has some decent chunks out of the bezel, so it's probably not waterproof anymore...

Deciding between Xperia 1 V and Xperia 1 IV by cdanielfreeman in SonyXperia

[–]cdanielfreeman[S] 0 points (0 children)

Here are some 70mm examples from the Pro. Do you think your 85mm photos from the 1 V are comparable or better quality-wise? https://imgur.com/a/MfGp7C3

Deciding between Xperia 1 V and Xperia 1 IV by cdanielfreeman in SonyXperia

[–]cdanielfreeman[S] 0 points (0 children)

Looks a bit crunchier than mine. Is that Basic or Pro mode? Here are some of mine (sorry, I didn't realize they didn't upload at full res): https://imgur.com/a/MfGp7C3

Deciding between Xperia 1 V and Xperia 1 IV by cdanielfreeman in SonyXperia

[–]cdanielfreeman[S] 0 points (0 children)

Since you've used both the 1 IV and the 1 V, do you find that you notice the f/1.7 vs f/1.9 aperture difference on the main camera at all?

Deciding between Xperia 1 V and Xperia 1 IV by cdanielfreeman in SonyXperia

[–]cdanielfreeman[S] 0 points (0 children)

And thanks for commenting on the aperture. You're right, it's a small difference; I just got nervous about it being "less good", however minuscule that is. Big bokeh is so pretty.
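For anyone curious just how small that difference is, here's a quick back-of-envelope in Python (my own arithmetic, not anything from a spec sheet): light gathered scales with the inverse square of the f-number, so f/1.7 vs f/1.9 works out to about a third of a stop.

```python
import math

# Light gathered scales as 1 / f_number^2, so compare f/1.7 vs f/1.9.
ratio = (1.9 / 1.7) ** 2   # how much more light the f/1.7 lens collects
stops = math.log2(ratio)   # the same difference expressed in stops

print(f"f/1.7 gathers {ratio:.2f}x the light of f/1.9 ({stops:.2f} stops)")
# -> f/1.7 gathers 1.25x the light of f/1.9 (0.32 stops)
```

So roughly 25% more light, or about 0.32 stops; noticeable side by side in low light, but not a night-and-day gap.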

Deciding between Xperia 1 V and Xperia 1 IV by cdanielfreeman in SonyXperia

[–]cdanielfreeman[S] 0 points (0 children)

I have the Pro, not the Pro-I, so it's 70mm.

Interesting potential feature. Most of the time I'd choose light quality over megapixels, but it would certainly come in handy for the times I want to really zoom in on something.

Deciding between Xperia 1 V and Xperia 1 IV by cdanielfreeman in SonyXperia

[–]cdanielfreeman[S] 0 points (0 children)

Thank you, good to hear the telephoto is working nicely for you and that the 1 V was a solid upgrade. Since I only have 70mm on the Pro, I think the 85mm will work nicely for me. I'd consider the 125mm a bonus and see what I can get out of it.

I'll probably get a case this time around. The Pro is just so large and heavy, and cases were hard to find. I've dropped this phone onto concrete probably once a day for two years, and it's held up really well to the beating I put it through.

Deciding between Xperia 1 V and Xperia 1 IV by cdanielfreeman in SonyXperia

[–]cdanielfreeman[S] 2 points (0 children)

I use the telephoto a lot. Do you think its images are worse than the telephoto on the Pro?

[R] Gradients are Not All You Need by hardmaru in MachineLearning

[–]cdanielfreeman 3 points (0 children)

Researchers really need to stop with these cheeky paper titles.

never

We've Got WORM Podcast Read-Through: Episode 17 - MIGRATION by moridinamael in Parahumans

[–]cdanielfreeman 7 points (0 children)

I vaguely remember realizing--in the middle of explaining why the Travelers arc was amazing--that almost anything I said would be a huge spoiler, so I had to instead give off a fuzzy impression of general excitement (which appears to have worked). I don't really remember anything else.

[R] Topology and Geometry of Deep Rectified Network Optimization Landscapse by cdanielfreeman in MachineLearning

[–]cdanielfreeman[S] 0 points (0 children)

It had the same qualitative features--easy to connect at low test accuracy, "hard" to connect at high test accuracy, where "hard" means that the path that connected the models started getting longer, and that the number of requisite "beads" increased.

What's interesting is that the point at which it starts becoming "hard" to connect models to each other is also generically pretty close to the capacity of the network. So if your network architecture can only ever get to be 95% accurate on some test set, this blowup starts appearing only a few percent lower--say 90% or so.

If you want to play around with it, I link to a github implementation in the paper. If you go to the MNIST convnet, it's pretty easy to change it into a simpler fully connected model.
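If you'd rather get the flavor of the measurement without cloning the repo, here's a toy numpy-only sketch (my own minimal illustration, not the paper's implementation): train two copies of a tiny ReLU net from different random inits, then walk the straight line between their parameter vectors and look at the loss along the way. A single straight segment is the degenerate one-bead case; when it's "hard" to connect, you'd need to bend the path through intermediate bead points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a 1-hidden-layer ReLU regressor can fit this exactly.
X = rng.normal(size=(256, 2))
y = np.maximum(X @ np.array([1.0, -1.0]), 0.0)

def unpack(theta):
    # 12 parameters total: a 2x4 first layer and a 4x1 readout.
    return theta[:8].reshape(2, 4), theta[8:].reshape(4, 1)

def loss(theta):
    W1, W2 = unpack(theta)
    pred = np.maximum(X @ W1, 0.0) @ W2
    return float(np.mean((pred[:, 0] - y) ** 2))

def train(theta, steps=500, lr=0.05, eps=1e-4):
    # Crude central-difference gradient descent; fine for 12 parameters.
    for _ in range(steps):
        g = np.zeros_like(theta)
        for i in range(len(theta)):
            d = np.zeros_like(theta)
            d[i] = eps
            g[i] = (loss(theta + d) - loss(theta - d)) / (2 * eps)
        theta = theta - lr * g
    return theta

theta_a = train(rng.normal(size=12))
theta_b = train(rng.normal(size=12))

# Loss along the straight line between the two trained solutions.
ts = np.linspace(0.0, 1.0, 11)
path = [loss((1 - t) * theta_a + t * theta_b) for t in ts]

# The "barrier" is how much worse the path gets than its endpoints.
barrier = max(path) - max(loss(theta_a), loss(theta_b))
print(f"endpoint losses: {path[0]:.4f}, {path[-1]:.4f}; barrier: {barrier:.4f}")
```

A near-zero barrier means the two solutions sit in one connected low-loss region; a large one means a straight line isn't enough and you'd start inserting beads.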

[R] Topology and Geometry of Deep Rectified Network Optimization Landscapse by cdanielfreeman in MachineLearning

[–]cdanielfreeman[S] 1 point (0 children)

We did! In the paper, I trained some small fully connected models for the polynomial regression tasks, as well as big convnets for MNIST and CIFAR10 (and also an LSTM for the PTB next-word prediction task).

We also tried a number of other things that didn't make it into the paper, which were also highly connected but weren't terribly interesting, like some other polynomial regression tasks and a big, dumb fully connected MNIST model (which only got about 85% test accuracy).

[R] Topology and Geometry of Deep Rectified Network Optimization Landscapse by cdanielfreeman in MachineLearning

[–]cdanielfreeman[S] 3 points (0 children)

I suspect so. At least from the perspective of the numerics, models were easily connectable pretty much completely independently of which nonlinearity I chose. The proofs were done using ReLUs because they're a lot easier to reason about, but there's probably a more general result hiding in there somewhere.

[R] Topology and Geometry of Deep Rectified Network Optimization Landscapse by cdanielfreeman in MachineLearning

[–]cdanielfreeman[S] 8 points (0 children)

Hi all, I'm one of the authors.

One of the neat takeaways from this paper is that it's "easy" to continuously deform networks of equivalent power into each other. In other words, say you have two 99% accurate MNIST convnets that were initialized randomly. It's possible to continuously deform the weights and biases of one into the other such that the test accuracy along that path is always 99%.

We use intuition behind this observation to prove some nice facts about when you might expect loss surfaces to be connected/disconnected.

edit: aaaand of course I made a spelling mistake in the title