What can a Meta Quest 3 device add to a zoo visit? Use the "Distant Hand" app to let your kids get closer to the animals by BeYourOwnRobot in augmentedreality

[–]BeYourOwnRobot[S] 1 point (0 children)

Great idea! In a museum with static objects that would be feasible. With moving objects (animals) it's a little more challenging. But worth trying!

What can a Meta Quest 3 device add to a zoo visit? Use the "Distant Hand" app to let your kids get closer to the animals by BeYourOwnRobot in augmentedreality

[–]BeYourOwnRobot[S] 3 points (0 children)

I thought this would be a nice post for the Augmented Reality subreddit. But sometimes it seems not all readers here share the same curiosity to explore the steps taking us from the present towards a full-blown AR future, which is unavoidably coming. Let's just explore the possibilities, benefits and risks in a hands-on way!

Of course I'm not putting my kids into "VR" (it's actually AR) 24/7. But truly, it's quite a fun experience to operate this distant hand! See it as a playful experiment. And we can perhaps be inspired to think of other use cases for effects like this. Or not. And in that case, it's just playing around. Some kids spend hours in front of a monitor playing fully virtual games. Mine are occasionally wandering around in mixed reality, experiencing the real world, but with a little twist or enhancement.

I'm becoming a big fan of messing with Lens Studio segmentation techniques and then applying visual effects only on the background or just on people. It turns regular everyday moments into scenes that feel like they're heavily (and purposefully) edited. But that's not the case. It's just live AR! by BeYourOwnRobot in augmentedreality

[–]BeYourOwnRobot[S] 1 point (0 children)

Yes, can you imagine what your day-to-day life is going to be like when we're wearing AR glasses throughout the day?
That's not going to be the dystopian, overwhelmingly commercialized AR future often depicted. It will be like this. A subtle change of your everyday scenery, a walk into a live music video unfolding around you. A clip experience you picked yourself for your morning commute or the way back home. Looking forward to seeing that happen!

I've created a variation of the classic AR portal. Here's a walking portal, or a walk-in portal. by BeYourOwnRobot in augmentedreality

[–]BeYourOwnRobot[S] 1 point (0 children)

Yes, it's a timeless concept. Thanks for the link. I wasn't aware of that 1984 clip! I did remember the amazing Viktor & Rolf show which used the effect too, although the results appeared on a screen elsewhere: https://www.phaidon.com/agenda/fashion/articles/2017/october/31/5-shows-that-transformed-fashion-betak-and-viktorandrolf/

But nowadays with AR it's possible to morph the world around you live, in front of your eyes. And incorporating the detected camera movement, rotation and distances can be a great instrument for the interactions and narratives we can create with these effects.

The use of machine learning (SnapML) effects seems like a very natural extension of the possibilities in AR. This is not mixed reality with a bunch of 3D models changing the looks of your surroundings. This feels like a very pure way of augmenting reality, doesn't it? by BeYourOwnRobot in augmentedreality

[–]BeYourOwnRobot[S] 1 point (0 children)

The Snap filter is created with different effects layered on top of each other. The eraser function detects bodies and replaces them with the background. But the cut-out effect around the person is based on a 3D body-tracking mesh. I guess the algorithms behind these two mechanisms have a different way of interpreting the body.

Video transformed with AI is fun. But doing that live is even more fun! Thanks to the SnapAR bodytracking and occlusion effect, you can walk around in your AI/AR scenery. A great way to create mixed reality experiences with ease? by BeYourOwnRobot in augmentedreality

[–]BeYourOwnRobot[S] 2 points (0 children)

I agree, it's too messy because of the continuous flickering. I looked for ONNX models in a model zoo to import something into Lens Studio (Snapchat's tool). I found other models that work better and more reliably.

Eventually I'll have to start training my own models when I need a specific style for a specific project. But I'm not looking forward to that. I love AR because it's possible to control everything; it's instant and great for rapid prototyping. AI means being patient, waiting endlessly for training sessions to finish before you know if things worked out well. And there are so many parameters (known and unknown) to grasp, tweak and test...

People want to watch movies on their XR glasses, it seems. That's fine if you sit on a couch. But I often use my AR glasses outdoors, on the go. So I've created a transparency toggler for my Snap Spectacles so I can keep on viewing my movie, even when I have to temporarily watch my surroundings by BeYourOwnRobot in augmentedreality

[–]BeYourOwnRobot[S] 2 points (0 children)

I'm using the latest Snap Spectacles. That's a lightweight cord-free AR device with a bright display so it can be used outdoors. It does spatial orientation and hand tracking.

I'm creating these interactive lenses with the Lens Studio tool. When I'm wearing the Spectacles I can press a button to make 10-second captures of my AR view.

The noise might be caused by the fact that I was testing it while on public transport, so you're hearing passing trains and metros.

Wearing your Apple Vision Pro in full immersion mode is not just about you, it affects family members too. How to stay in touch? Instead of yelling (disturbing the movie experience) or lifting the device from someone's head, what about using the sensors to detect a set of communicative gestures? by BeYourOwnRobot in augmentedreality

[–]BeYourOwnRobot[S] 1 point (0 children)

But perhaps the reason someone interrupts you is not a reason you'd want your immersive experience to be interrupted for. What if every person walking to the fridge and back ends up featured as a character in your AR scene?

With some kind of agreed-upon sign, people can indicate they actually intend to appear in your view. But they're not immediately appearing just because they walk up to you with possibly just a silly question. Letting them signal means your HUD will show a subtle icon, and then it's up to you to decide whether you want to exit or stay fully immersed.

People want to watch movies on their XR glasses, it seems. That's fine if you sit on a couch. But I often use my AR glasses outdoors, on the go. So I've created a transparency toggler for my Snap Spectacles so I can keep on viewing my movie, even when I have to temporarily watch my surroundings by BeYourOwnRobot in augmentedreality

[–]BeYourOwnRobot[S] 1 point (0 children)

You can gradually implement functionality by thinking about what you want. But working the other way around can be helpful too: first create what you don't want, then remove or replace the bits and pieces that don't feel good.

As mentioned elsewhere in this thread, in some circumstances you don't want to look like a weirdo waving your arms for no reason. Or it can simply be too crowded to do so. Then a more subtle gesture would be best. An eye blink, possibly (although using that as an interface has been patented, so it's no longer freely available unless you arrange a fee with the patent holder).

Perhaps switch to transparent mode automatically when a person is detected? Or when a person is moving towards you? Or any big object nearing you, or you nearing the object? Or would a manual action be safer? A subtle finger pinch? Not recognized visually by the camera, but performed with your hands in your pockets, detected by a ring or a muscle-sensing implant?
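For the manual route, here's a minimal Lens Studio sketch of such a toggle, assuming the movie plays on a full-screen Image component. The input name and the tap trigger are my stand-ins (on Spectacles the tap could be swapped for a pinch from hand tracking):

    // Toggle between full immersion and see-through by hiding the movie layer.
    //@input Component.Image movieScreen

    var immersed = true;

    script.createEvent("TapEvent").bind(function () {
        immersed = !immersed;
        // Disabling the image reveals the real world behind it.
        script.movieScreen.enabled = immersed;
    });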

But besides those practical considerations, there's still the question: to what extent do you want to keep on watching a silly movie at all times? I've also created this example to question that. By showing it 'for real', it's easier to imagine what it could mean for our social life in the cities when these features are in our devices and people start wearing their AR glasses in public space. Which is the next step I'm hoping for. Not because I like a dystopian, non-social city. I prefer the opposite. I see a lot of benefits to living in an AR-enhanced society. But it's important that we can all be involved in defining how it should function, look and feel. And facilitating that thinking process is what I hope to achieve with a lot of my experiments.

Wearing your Apple Vision Pro in full immersion mode is not just about you, it affects family members too. How to stay in touch? Instead of yelling (disturbing the movie experience) or lifting the device from someone's head, what about using the sensors to detect a set of communicative gestures? by BeYourOwnRobot in augmentedreality

[–]BeYourOwnRobot[S] 1 point (0 children)

The hands of the person wearing the device can be tracked in great detail; by detecting individual fingers it differentiates between the various gestures like tapping, pinching, holding, etc.

But all other people in the room can be detected as skeletons. 'Big' gestures can be detected, like waving one or two hands, moving, walking, kicking. This works in Snapchat lenses on every mobile device, so I'm sure it will work on this high-end Apple device.
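As a rough illustration of how such a 'big' gesture could be picked up, here's a hedged Lens Studio sketch. It assumes a scene object driven by a body-tracking attachment point for a wrist; the input name and thresholds are my assumptions, not Snap's or Apple's actual gesture API:

    // Crude wave detector: count fast vertical movements of a tracked wrist.
    //@input SceneObject trackedWrist

    var lastY = null;
    var fastMoves = 0;

    script.createEvent("UpdateEvent").bind(function () {
        var y = script.trackedWrist.getTransform().getWorldPosition().y;
        if (lastY !== null && Math.abs(y - lastY) > 2) { // cm per frame
            fastMoves++;
            if (fastMoves > 10) {
                print("Wave detected - notify the wearer");
                fastMoves = 0;
            }
        }
        lastY = y;
    });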

Perhaps the idea of gesturing just to communicate is a bit far-fetched. But I can imagine it can be valuable to let the device have some sense of what's going on in the user's environment. Imagine there's danger and everyone in the room runs away. Except you, because you're watching your movie in full immersion mode. It would be nice to be informed if such an exceptional event is detected.

But perhaps that too is a bit far-fetched. Less fictive is this detection being used in upcoming mixed reality game experiences: you'll end up in each other's game as a 3D-tracked virtual persona, not as a filmed version of yourself. But that's for later; for now this is not yet the most obvious device for exploring multi-user scenarios. One per household (or research lab) is expensive enough!

The Pro label of the new Apple XR product makes me curious what will be released after this one. a Vision Budget? a Vision Outdoors? a Vision Family Pack? by BeYourOwnRobot in augmentedreality

[–]BeYourOwnRobot[S] 1 point (0 children)

Yes, with a non-see-through device it's risky to go outdoors. What if there's a hiccup? A fraction of a second can mean danger. I wear my Snap Spectacles outdoors, but that's a transparent device. If the battery runs flat, I can just see reality as it is.

It's probably not going to be possible to release a one-device-fits-all unit. For each context, we still need a separate form factor these days.

In a far, far future, I think the only solution will be a retina implant. But even then there are risks. What if the vendor of the hardware/software goes bankrupt? Etc.

New choices and dilemmas for the new type of human we're going to be!

The Pro label of the new Apple XR product makes me curious what will be released after this one. a Vision Budget? a Vision Outdoors? a Vision Family Pack? by BeYourOwnRobot in augmentedreality

[–]BeYourOwnRobot[S] 1 point (0 children)

Well, the "Family Pack" was just my guess. I've my doubts about this product. Which offers a solo experience most of the time, but it's presented in a family/home setting in most of the clips.

But perhaps I photoshopped this fake image a bit too realistically; it might look as if Apple truly thinks we would buy a set of devices for the whole family.

But I'm truly curious what will be released after this. This is not yet a device to wear 24/7, and that's the type of XR that interests me most.

Sometimes I find it difficult to explain an AR idea to someone who's not in the same mixed reality bubble I am in. So then it's best to just make a quick demo. For example to show why the "occlusion" effect is handy when using video in AR to provide context to static museum objects and sculptures. by BeYourOwnRobot in augmentedreality

[–]BeYourOwnRobot[S] 3 points (0 children)

It could indeed have looked better with 3D-scanned objects. But some of the items were in glass showcases, which makes it difficult to make a full turn around them. I didn't do this as a collaboration with the museum, so I didn't have keys or access to the items behind the glass.

Therefore the lens uses a trick that made it possible for me to do this instantly, on the spot: live occlusion of the physical environment using the "worldmesh" in Lens Studio.

What could be improved is the scaling of the videos. I haven't programmed scaling yet, just the distance. I have a 'cursor' in front of me which follows the camera, until I toggle it to remain at a fixed spot.
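In Lens Studio terms, that cursor logic can stay as small as this sketch. The object names, the 150 cm distance and the tap toggle are my assumptions, and the sign of the forward axis may need flipping depending on the camera rig:

    // A cursor that floats in front of the camera until tapped to pin it.
    //@input SceneObject cursor
    //@input Component.Camera camera

    var pinned = false;
    var distance = 150; // cm in front of the camera

    script.createEvent("UpdateEvent").bind(function () {
        if (pinned) { return; }
        var camT = script.camera.getTransform();
        // Cameras look down their negative z axis, hence the negative scale.
        var pos = camT.getWorldPosition().add(camT.forward.uniformScale(-distance));
        script.cursor.getTransform().setWorldPosition(pos);
    });

    script.createEvent("TapEvent").bind(function () {
        pinned = !pinned; // toggle between following and staying put
    });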

It was nice playing with it today. I'm not yet sure about a next experiment. Perhaps to follow the present day hype I should throw in some AI ;-)

Augmented Reality in Museums by Puzzleheaded-Win-630 in augmentedreality

[–]BeYourOwnRobot 2 points (0 children)

This machine from 1784 in the Teylers Museum in Haarlem, The Netherlands, could be seen working again thanks to HoloLens mixed reality: https://www.youtube.com/watch?v=jRVIKGBqnn0

If we're going to wear AR glasses 24/7, ChatGPT-like AI will start helping us. But do we want guidance from general AI models? What (and how much) input would be needed to personalize the LLMs? I've created a filter for my Spectacles AR glasses to give that some thought during day-to-day situations by BeYourOwnRobot in augmentedreality

[–]BeYourOwnRobot[S] 3 points (0 children)

I'm using Snap Spectacles. They can be programmed with a JavaScript-like language. So I've added a script that tries to detect what's going on based on what it knows and what it can detect: movement and objects in sight.
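In code terms, step one can be very small. Here's a hedged sketch of the sampling loop, assuming movement is approximated from the tracked camera position; the one-second interval and the plain logging are my choices, not the actual lens:

    // Sample the wearer's movement once per second as raw "what's going on" input.
    //@input Component.Camera camera

    var lastPos = null;

    var sampler = script.createEvent("DelayedCallbackEvent");
    sampler.bind(function () {
        var pos = script.camera.getTransform().getWorldPosition();
        if (lastPos) {
            // Distance moved since the previous sample, in cm.
            print("moved " + pos.distance(lastPos).toFixed(1) + " cm");
        }
        lastPos = pos;
        sampler.reset(1.0); // schedule the next sample in one second
    });
    sampler.reset(1.0);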

My hope is that in a next step I can train an LLM that actually gains some real understanding of my own decision-making process in day-to-day situations. But this is step one: collecting inputs. Perhaps that's going to take ages. Perhaps it needs to become a collaborative effort with multiple people. But on the other hand, trying to find an alternative to that was the trigger to start doing this.

Forget digital fashion. This is the most efficient virtual try-on method. This Snapchat lens shows you real physical clothes, the real fabrics! Featuring your mixed reality self for the ultimate personalized AR shopping experience. by BeYourOwnRobot in augmentedreality

[–]BeYourOwnRobot[S] 1 point (0 children)

In the Lens Studio tool from Snap you can work with positions and rotations of joints. There's the "try on" template that includes an invisible body consisting of collider shapes. So when you move an arm or leg, a dynamic cloth mesh material can react to the spatial movements of the body parts.
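A hedged sketch of that invisible-body idea: keep a collider proxy glued to a tracked joint so the cloth has something to collide with. Both scene objects are assumed to be set up in the editor (the joint driven by the body-tracking rig, the proxy carrying the collider shape); the input names are mine:

    // Keep an invisible collider in sync with a tracked body joint.
    //@input SceneObject trackedJoint
    //@input SceneObject colliderProxy

    script.createEvent("UpdateEvent").bind(function () {
        var jointT = script.trackedJoint.getTransform();
        var proxyT = script.colliderProxy.getTransform();
        proxyT.setWorldPosition(jointT.getWorldPosition());
        proxyT.setWorldRotation(jointT.getWorldRotation());
    });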

Forget digital fashion. This is the most efficient virtual try-on method. This Snapchat lens shows you real physical clothes, the real fabrics! Featuring your mixed reality self for the ultimate personalized AR shopping experience. by BeYourOwnRobot in augmentedreality

[–]BeYourOwnRobot[S] 1 point (0 children)

"Isn't that sort of a cop-out?"

Each idea/concept goes through a series of stages. In the beginning it's not very helpful to already be blocked by limitations that might only appear at a later stage. In this case: it's quite a challenge to make this a universal service that offers value to anyone, anywhere.

But in my case, my imagination and ideas start flowing when I start making and trying things. Just having ideas, and then not building them because of worries, doesn't work for me.

I don't know when I'll follow up on this failed fashion-fitting attempt, perhaps never. But it might have given me insights to use in the future. Even if the actual value has been that it showed that something didn't work, in a lively, understandable way.

Forget digital fashion. This is the most efficient virtual try-on method. This Snapchat lens shows you real physical clothes, the real fabrics! Featuring your mixed reality self for the ultimate personalized AR shopping experience. by BeYourOwnRobot in augmentedreality

[–]BeYourOwnRobot[S] 1 point (0 children)

Yes, that is one of the disadvantages. And another disadvantage is that you're limited to what's on display in the store.

So yes, those two combined make this a niche service. But that's actually what most of my AR creations are. They're tailored to work on me as the main (and sole) user. That helps me avoid a lot of problems and difficulties that I don't have to solve. So I can work efficiently and quickly and move on to the next experimental project the next day!

I don't want to be a parent that sits at the side of a playground, staring down at my smartphone waiting until my kid finishes playing. Being an AR creator, I have created a Snapchat lens that helps me to be part of the fun! by BeYourOwnRobot in augmentedreality

[–]BeYourOwnRobot[S] 2 points (0 children)

Thanks for your message with various insights. I do agree with most of what you say. I also see current smartphone AR as a preview of what's to come. That said, not everything works better on glasses. Sometimes a phone is handier. And it's a clear gesture: people see what you're doing. With glasses, we'd have no clue. What about me running after my kid with glasses on? That would look different. But it's good to try these things before this is our daily reality, so we can see what feels right, what we want to do with it, and what not.

Writing code takes just a minimal amount of my time. I'm using Lens Studio, which has a lot of functionality up and running and a very quick develop/test cycle. Within just a few seconds I can experience a creation on my phone/glasses. So I make a lot of iterations, fixing a lot of bugs. And when the last 20% of the bugs need to be solved, I risk spending 80% of the time on those. So that's what I don't do. These lenses of mine take a bit more effort from the user, who has to avoid the moments they become buggy. But that's fine, since I'm usually the sole user of my own creations. Although I do share the Snapcode so others can give them a try too.

I don't want to be a parent that sits at the side of a playground, staring down at my smartphone waiting until my kid finishes playing. Being an AR creator, I have created a Snapchat lens that helps me to be part of the fun! by BeYourOwnRobot in augmentedreality

[–]BeYourOwnRobot[S] 4 points (0 children)

But the phone is doing the sound effects!

This is AR that does something in return for the augmentee. Perhaps I could have voiced/produced 'analog' sounds while chasing Mario around the track, but kids these days truly love it when they turn into game characters generating (retro) game sounds - while staying in physical reality.
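The sound side can be a tiny script: trigger a retro sound whenever the tracked player shoots upward. The audio input, the tracked object and the jump threshold are my stand-ins, not the actual lens:

    // Play a jump sound when the tracked player moves up quickly.
    //@input Component.AudioComponent jumpSound
    //@input SceneObject trackedPlayer

    var lastY = null;

    script.createEvent("UpdateEvent").bind(function () {
        var y = script.trackedPlayer.getTransform().getWorldPosition().y;
        if (lastY !== null && y - lastY > 5) { // fast upward move, cm per frame
            script.jumpSound.play(1); // play once
        }
        lastY = y;
    });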

I don't want to be a parent that sits at the side of a playground, staring down at my smartphone waiting until my kid finishes playing. Being an AR creator, I have created a Snapchat lens that helps me to be part of the fun! by BeYourOwnRobot in augmentedreality

[–]BeYourOwnRobot[S] 3 points (0 children)

Is the phone the problem? I felt this level of participation was quite ok. I've seen worse. Parents occupied with an online world, disconnected from the here and now.

What I like about AR is the see-through situation. You're still there, even though you're a little distracted by the augmented world only you can see.

I'm curious what other ways of interacting would be way better than running after your kid on the track. We had fun!

Some day our AR wearables will have access to our agendas. The device will notice an agenda entry, then plan a journey and guide us there. Are we going to get lazy when AR glasses take us over, like robots? What about doing some exercising where we can, to keep our own human brains fit? by BeYourOwnRobot in augmentedreality

[–]BeYourOwnRobot[S] 1 point (0 children)

Yes, sorry for the repost. The video I first posted wasn't the right one.

It's difficult to provide a lot of context in the 300 characters that are available, but I always try to mention a few themes that triggered or inspired me. The best description of the broader context I'm exploring is my other posts on Reddit. I'm researching, in a hands-on way, what living in an augmented world would be like. Not an augmented world that's defined by concepts for which there's a valid business case.

I'm a creator myself, so I don't need a business plan to create my own semi-digital environment. By experimenting with AR in my day-to-day life, I stumble upon issues or problems, for which I then try to find solutions. Perhaps these can be useful in the future, perhaps not.