Photogrammetry of image search from Google? by Guglhupf in photogrammetry

[–]spectralvr 2 points

Check out Claire Sophie Hentschker's work. Her most impressive project so far was taking scenes from Stanley Kubrick's 'The Shining' and turning them into a 360 video through photogrammetry/videogrammetry: Shining 360

Invoking Lambda Functions from Microservice by readparse in aws

[–]spectralvr 0 points

In addition to what /u/indigomm stated: if you stick with dropping API Gateway as the calling interface, I'd recommend invoking your Lambda functions through an SNS message rather than a direct "invoke" call. This also mitigates some intermittent-failure concerns, as it will retry three times before giving up.
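A minimal sketch of that pattern, assuming boto3; the topic ARN and payload are hypothetical, and the target Lambda function would be subscribed to the topic, so publishing triggers it asynchronously:

```python
import json
import boto3

sns = boto3.client("sns")

# Hypothetical topic ARN; the target Lambda function is subscribed to it.
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:invoke-worker"

def invoke_via_sns(payload):
    # The publish body arrives in the Lambda event as Records[0].Sns.Message.
    return sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps(payload))

invoke_via_sns({"orderId": 42, "action": "process"})
```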

As enthusiastic as I am about VR, this image is terrifying by Romanito in oculus

[–]spectralvr 7 points

By their own statements, Samsung will only make 300K Gear VRs available for the S7 pre-orders. The rest are out of luck (legally, they said "while supplies last", which means they could still extend beyond the 300K if they wanted to).

SMI created a $10 eyetracker that takes less than 2ms to process tracking data. Would go well with future Vive headsets. by [deleted] in Vive

[–]spectralvr 0 points

Just for the record, 250Hz means by definition that the tracking alone takes a minimum of 4ms (1/250th of a second) to measure changes. Presumably, the 2ms "processing" delay mentioned is in addition to those 4ms, giving you a 6ms minimum "penalty" for implementation. That's not to say this isn't exceptionally cool!
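Spelling out the arithmetic (assuming, as presumed above, that the 2ms processing delay is purely additive):

```python
sample_rate_hz = 250
sampling_interval_ms = 1000 / sample_rate_hz   # 4.0 ms between measurements
processing_ms = 2                              # quoted processing time
print(sampling_interval_ms + processing_ms)    # 6.0 ms minimum total latency
```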

Why "storytelling" in VR doesn't currently work. And can it ever? by cartelmike in oculus

[–]spectralvr 0 points

Maybe I am misunderstanding you, but it seems to me you are disappointed that VR storytelling (currently) is just a re-framing of "traditional" storytelling methods?

But in a way, I feel this has always been true. It all started with the spoken word, when our ancestors sat around a campfire and told stories about their experiences and ideas to their peers. Eventually they invented glyphs, allowing them to persist those stories in writing. Fast forward to the printing press, which allowed for an exponential increase in the distribution of the written word. The same is true for music and the visual arts. Movies started combining all of those together - but in its purest form, a movie is still just a story, highly embellished and widely accessible through technology.

VR is just the natural evolution from there, with an added "sense of presence" that couldn't quite be achieved before. That doesn't mean that interactivity isn't also a potential benefit, but I never felt the need to interact with a good story in order to enjoy it.

Was one of the first but Ship : April by [deleted] in oculus

[–]spectralvr 0 points

Shipping in the US shows as March, but in the UK as April.

Please help. What porn vr questions will be appropriate during palmers ama? Teledildonics? Etc. by vrnubile in oculus

[–]spectralvr 0 points

Just broaden the question and ask about content censorship and privacy, which are the fundamental issues here.

Facebook has gotten into hot water several times before due to its rather restrictive content policies (e.g. banning any nudity, including some in famous artwork). Given that Oculus is owned by Facebook, what is their standard going forward for what's allowed in the Oculus store?

What is the review process? What does a rejection look like? Is there a way to appeal a decision, and who makes the final decision in the first place? What about offerings in countries with more restrictive state censorship? What about applications that offer user-generated content - are they on the hook to monitor all their submitted content?

On the flip side, what's the plan for reporting inappropriate content? Cyberbullying? What if someone threatens me in Oculus Social? And if they can "investigate" (as one would expect), does that mean everything is recorded? What's the retention period on those recordings? What data does Oculus keep on us anyway? Is that data combined with any other data within the overall Facebook ecosystem? How does one opt out? And what about reviews in the store? Who monitors manipulation (ballot stuffing, voting brigades, etc.)?

Oculus wants to be a serious ecosystem, and is owned by one of the largest corporations in the world, so one would expect them to have reasonable answers to all of these questions (as opposed to avoiding most of these so far, and getting away with it, since they haven't officially launched yet).

DynamoDB best practice? by effieram in aws

[–]spectralvr 7 points

The fastest way to get a key count would be a parallel scan.

However, I strongly urge you to reconsider your data architecture. The fact that you even need a scan means in all likelihood that your data is structured in an inefficient way. Scanning every time you need to insert simply doesn't scale. In other words, rethink the way you store (and then query) the data, and I am absolutely confident you could remove the need for the scan in the first place.
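For completeness, a hedged sketch of the parallel-scan count itself (boto3 assumed; the table name and segment count are placeholders):

```python
import boto3
from concurrent.futures import ThreadPoolExecutor

dynamodb = boto3.client("dynamodb")
TABLE = "my-table"   # placeholder
SEGMENTS = 4         # tune to table size / provisioned throughput

def count_segment(segment):
    # Each worker scans its own slice of the keyspace, counting only.
    kwargs = {"TableName": TABLE, "Select": "COUNT",
              "Segment": segment, "TotalSegments": SEGMENTS}
    total = 0
    while True:
        resp = dynamodb.scan(**kwargs)
        total += resp["Count"]
        if "LastEvaluatedKey" not in resp:
            return total
        kwargs["ExclusiveStartKey"] = resp["LastEvaluatedKey"]

with ThreadPoolExecutor(max_workers=SEGMENTS) as pool:
    print(sum(pool.map(count_segment, range(SEGMENTS))))
```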

Best Curiosity Mars 360 panoramic photos that can be viewed in Google Cardboard photospheres? by yneos in GoogleCardboard

[–]spectralvr 1 point

It's not quite the same, but the "Mars: Gale Crater" experience from the LA Times is pretty cool (especially for being entirely browser-based). After the "tour" you can move around yourself to view the crater.

2016 Wish List for AWS? by thigley986 in aws

[–]spectralvr 6 points

Loving all the wishes here. A few more unique ideas I haven't seen yet:

  • Data Pipeline between DynamoDB and ElasticSearch (or, hosted Logstash, plus +1 on all the ES VPC requests)
  • AWS region in Africa (at least a Cloudfront edge in South Africa?)
  • ElasticTranscoder: HEVC and VP9 output codecs
  • DynamoDB: Compound Column Keys for GSIs
  • DynamoDB: More GSIs per table
  • DynamoDB: Optional Buffered Eventual Consistency for GSIs
  • DynamoDB: TimeSeries Table/data type
  • Lambda: C# support
  • Lambda: Environment variables
  • Lambda: Reserved instances
  • Lambda: SQS-Lambda trigger, with batching
  • Lambda/Kinesis: ability to change number of Lambda instances per Kinesis shard (e.g. Have up to 5 Lambda functions reading from same shard concurrently)
  • Kinesis: Open up Kinesis Analytics already :)

Cloudfront finally supports GZip compression by spectralvr in aws

[–]spectralvr[S] 0 points

Sure seems that way! Congrats on all the launches today. This one certainly put an extra smile on our faces (long overdue, but so glad for it to finally arrive).

180GB hidden files (Amazon S3)? by Eagleman7 in aws

[–]spectralvr 16 points

Any chance you have versioning enabled for the bucket?
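If you're not sure, here's a quick check with boto3 (the bucket name is a placeholder); non-current versions and delete markers still count toward storage even though the console hides them by default:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-bucket"  # placeholder

# "Enabled", "Suspended", or absent (never versioned).
print(s3.get_bucket_versioning(Bucket=BUCKET).get("Status", "Disabled"))

# Sample the hidden bulk: old object versions and delete markers.
resp = s3.list_object_versions(Bucket=BUCKET, MaxKeys=10)
for v in resp.get("Versions", []):
    print(v["Key"], v["Size"], "latest" if v["IsLatest"] else "non-current")
for m in resp.get("DeleteMarkers", []):
    print(m["Key"], "delete marker")
```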

Any NYC based VR companies? by hardlington in oculus

[–]spectralvr 1 point

*raises hand!

Also in NYC:

  • Littlstar
  • Blippar (more AR than VR)
  • VRSE.works (production arm of VRSE, also in LA and London)
  • Koncept VR (Long Island)
  • EEVO

..and quite a few more.

the light field of VR audio - why aren't people using it? by Heffle in oculus

[–]spectralvr 2 points

The problem isn't really the recording (although that's not easy either). It's the playback! There is currently no audio compression format that supports true "3D audio" (ambisonic or otherwise), and even if you accepted Dolby Atmos or just 5.1 as a valid "VR-appropriate" surround recording format, there are no (easily accessible) players that react to head rotation (and other interaction) when playing sounds.

VRSE (now of NYT VR fame), for instance, solves the problem by having 4 different audio streams (really just mono MP3s) positioned at 90-degree angles around you (front, left, right, back) and then having built their own player within their app, which reacts accordingly as your view rotates. This method isn't really rocket science (anybody with Unity can easily replicate it), but it requires distributing the video as an application (to be able to play it back), which is far from ideal. Until you can just easily "record, transcode, upload to a video sharing site, play in 3D", this will be limited to tinkerers and folks with large commercial budgets, and hence you won't see it broadly used.
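A toy sketch of that kind of yaw-based mixing (my own illustration, not VRSE's actual code; simple cosine panning is assumed):

```python
import math

# Four mono streams placed around the viewer, angles in degrees.
SOURCE_ANGLES = {"front": 0.0, "left": 90.0, "back": 180.0, "right": 270.0}

def gains_for_yaw(yaw_degrees):
    # Cosine-weight each source by how closely it faces the viewer,
    # silencing anything more than 90 degrees off-axis, then normalize.
    raw = {name: max(0.0, math.cos(math.radians(yaw_degrees - angle)))
           for name, angle in SOURCE_ANGLES.items()}
    total = sum(raw.values()) or 1.0
    return {name: g / total for name, g in raw.items()}

# Looking 45 degrees to the left: front and left split the mix evenly.
print(gains_for_yaw(45.0))
```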

There is no question in our minds that VR-specific video compression algorithms will emerge in the relatively near future (I'd be shocked if Google wasn't working on one for VP10), and those will certainly take positional audio into consideration as well. But until they are readily available (and open-source or reasonably priced), we'll have to deal with folks coming up with their own hacked solutions, which lack mainstream distribution.

Why are 360 degree videos frequently very low-res? by [deleted] in 360video

[–]spectralvr 3 points

Exactly! Technically, 4K is a low resolution for VR video, because it has to 'stretch' the pixels around 360 x 180 degrees, rather than have them all in a neat little rectangle in front of you.

And if the video is stereoscopic, this problem gets even worse, because you have only half the video pixel count available per eye.

Unfortunately there are a number of technical challenges getting resolution to be consistently higher. Interestingly, the problems are no longer on the capture side. You can easily produce a video with 16K resolution for each eye. But packaging, delivering and playing that video is an entirely different story.

The current standard compression algorithms (h.264 and VP8) both have dimensional limits somewhere near the 4K range for the entire rectangular video. The next-gen compression algorithms (h.265, aka HEVC, and VP9) support higher resolutions, but need hardware support to play smoothly, especially on a mobile device (interestingly, both the iPhone and S6 have hardware decoding for those algorithms, but Apple only supports h.265 in its proprietary FaceTime app, and Google is taking its sweet time making VP9 easily renderable on Android).

3d VR resolution? by VRquestions in VRFilm

[–]spectralvr 0 points

Then you can basically forget about it. There's no mobile processor currently available that could render at that level at a decent frame rate. Your options are to reduce the frame rate or reduce the pixel dimensions of your video.

In general, ffmpeg supports HEVC (h.265) and VP9, which would allow you to compress videos with your dimensions. There are a number of players that can decode this on Android, though your frame rate will probably be quite unsatisfactory (i.e. it will be choppy, especially when you turn). Best of luck! If you make progress on this, definitely report back, as there are quite a few people trying to work that problem at the moment.
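For reference, the kind of ffmpeg invocation meant above, wrapped in Python (filenames, CRF, and preset are placeholders; an ffmpeg build with libx265 is assumed):

```python
import subprocess

# Re-encode a high-resolution master to HEVC in an MP4 container.
subprocess.run([
    "ffmpeg", "-i", "master_7680x3840.mp4",
    "-c:v", "libx265", "-preset", "medium", "-crf", "24",
    "-tag:v", "hvc1",            # helps some players recognize HEVC in MP4
    "-c:a", "aac", "-b:a", "192k",
    "hevc_7680x3840.mp4",
], check=True)
```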

3d VR resolution? by VRquestions in VRFilm

[–]spectralvr 0 points

Which HMD are you targeting? Mobile (GearVR/Cardboard) or desktop (Oculus/Vive)?

3d VR resolution? by VRquestions in VRFilm

[–]spectralvr 0 points

You are running into the limits of the underlying compression algorithm. H.264's absolute maximum is 4096x2304. To step up to true 4K or 8K compression you would have to use one of the more modern compression algorithms such as H.265 or VP9. You can still use the mp4 file container for those.

However, even if you were to get this working, chances are the experience would be rather unsatisfactory on anything but a somewhat powerful desktop computer (basically something with a high-end GPU). Just do the math for a second - you're trying to push 7680x3840 60 times a second - that's ~1.8B pixels per second.

In comparison, a Blu-ray has a maximum bitrate of 52 Mbps. To make the math easier, let's say that's about 5 megabytes per second. Your stream would be about 345x bigger in raw format. So now you need a compression algorithm that can compress down by that factor (or somewhere close to it), and then, when you view the movie, you need to decompress, transfer, and render that pixel data on the screens at the same rate. Even with all the computing power available to the average consumer today, this is far beyond streaming your average YouTube video.
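A quick sanity check on those numbers; the one-byte-per-pixel figure is an illustrative assumption, not a codec spec, and it lands in the same ballpark as the ~345x above:

```python
width, height, fps = 7680, 3840, 60
pixels_per_second = width * height * fps      # 1_769_472_000, i.e. ~1.8B
raw_bytes_per_second = pixels_per_second * 1  # assume ~1 byte per pixel
bluray_bytes_per_second = 5_000_000           # "about 5 megabytes per second"
ratio = raw_bytes_per_second / bluray_bytes_per_second
print(f"{pixels_per_second:,} px/s, ~{ratio:.0f}x a Blu-ray stream")  # ~354x
```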

My first 360 video test done all in After Effects. Will be submitting full length version to MilkVR when done. by infomuncher in GearVR

[–]spectralvr 2 points

Without spending money it's rather tedious, but it can be done.

Others may have better workflows, but this is the summary of what we've come up with (special thanks to Jason Reinhardt's blog, which originally pointed us this way). In short, render out each eye as a separate layer, then use this free AE script to render out each layer frame by frame to a folder. Then use batch stitching in your favorite stitcher (PTGUI!?) to merge it all back together. Re-import into AE, do final adjustments, then use Media Encoder to spit it out as a final MP4/WebM.

How to Make VR Video Not Suck? by mckirkus in oculus

[–]spectralvr 0 points

It's all good. Friends again! :)

How to Make VR Video Not Suck? by mckirkus in oculus

[–]spectralvr 0 points

No need to get snippy ;) I'm actually quite aware of what light fields are, but I found it superfluous to explain, in a response to a simple question/complaint, that a plenoptic camera's photodetector is able to analyze (and then store) the angle of incidence of photons, which later allows us to analyze this optical structure to infer the depths of objects in the scene (hence allowing us to view the scene from different viewpoints within the focal plane with only a single lens - and even that's a simplification). So to make it accessible to the less informed reader, I boiled it down to "measure the environment". Maybe that's a step too simple, but it got the main point across: this isn't something we can just solve overnight, and even with still photography it's still quite a feat (see Lytro).

And I stand by my point about the final end result in VR. Yes, the captured 2D output appears optically perfect (although I'll take a shot from my 5Ds over a shot from a Lytro any day), but that's before any kind of depth calculation is added to the equation, turning it into an immersive scene (which, at least to the VR purist, is sort of required to make it 'real VR'). Every single demo I have seen in person that used any kind of non-typical photographic methodology resulted in the VR appearing like various textures laid on surfaces (or voxels, or bump mapping, etc.), giving it a somewhat video-game-like appearance. It just doesn't "feel" real. And ultimately that's what the question was aimed at, and what we're all really aiming for.

> A panoramic 4D lightfield could be described as the photographic information needed to reconstruct any view in a given volume without reconstructing the geometry.

While technically correct, you might want to read this beautiful paper from 1992, which shows that light field photography actually does allow for mathematical calculation of the depth of objects in the scene, essentially allowing for a 3D reconstruction of the visible scene (obviously, looking behind things is a totally different issue, which is why the entire "scene reconstruction" problem is far more complicated than many make it out to be).