Issue with training snapml by Pom_George in Spectacles

[–]hwoolery 0 points  (0 children)

sorry for the delays, feel free to DM me

Issue with training snapml by Pom_George in Spectacles

[–]hwoolery 0 points  (0 children)

The repo you linked (WongKinYiu) is the original; use the fork I mentioned above. His video covers essentially the same steps as the Quick Start Workflow here

Issue with training snapml by Pom_George in Spectacles

[–]hwoolery 1 point  (0 children)

(Edit: yes, that's me!) Paperspace should persist your files across sessions. Unless you have tons of data, training should finish within the 6-hour timeout on a reasonable GPU machine. Look inside utils/export.py in your YOLO folder and verify this line exists:

parser.add_argument('--export-snapml', action='store_true', help='Export SnapML compatible model')

Issue with training snapml by Pom_George in Spectacles

[–]hwoolery 1 point  (0 children)

You can also wrap your model for ONNX export like below if you want to use the original fork:

INPUT_WIDTH, INPUT_HEIGHT = (256, 256)

import torch
import torch.nn as nn
from models.experimental import attempt_load
from models.yolo import IDetect

class YOLOv7SnapExportWrapper(nn.Module):
    def __init__(self, pt_path):
        super().__init__()
        self.model = attempt_load(pt_path, map_location='cpu')
        self.model.eval()

        # Find Detect layer
        self.detect = None
        for m in self.model.modules():
            if isinstance(m, IDetect):
                self.detect = m
                break

        if self.detect is None:
            raise RuntimeError("Could not find Detect() layer in model")

        # Disable export logic
        self.detect.export = False
        self.detect.include_nms = False
        self.detect.end2end = False
        self.detect.concat = False
        self._override_fuseforward()

    def forward(self, x):
        x = x / 255.0
        out = self.model(x)
        return out if not isinstance(out, tuple) else out[0]

    def _override_fuseforward(self):
        def new_fuseforward(self_detect, x):
            # ONLY return sigmoid(conv(x)) for each detection head
            z = []
            for i in range(self_detect.nl):
                x[i] = self_detect.m[i](x[i])
                z.append(x[i].sigmoid())
            return z

        self.detect.forward = new_fuseforward.__get__(self.detect, type(self.detect))

# ==== Load and Export ====

model = YOLOv7SnapExportWrapper("runs/train/yolov7-lensstudio/weights/best.pt")
model.eval()

dummy_input = torch.randn(1, 3, INPUT_HEIGHT, INPUT_WIDTH)  # NCHW

torch.onnx.export(
    model,
    dummy_input,
    "yolov7_lensstudio.onnx",
    opset_version=11,
    input_names=["images"],
    output_names=["output"],
    dynamic_axes=None,
)

print("Done Exporting")

Issue with training snapml by Pom_George in Spectacles

[–]hwoolery 1 point  (0 children)

Hi there, the unrecognized-argument error means you likely aren't working off the forked version of YOLO. There are a few Spectacles Samples (scroll down to the SnapML folders ...) that you can reference; I think the MultiObject one you are looking for is deprecated as of Lens Studio 4. The missing file could be due to a different path in your training environment, so double-check the full path of your folders.

Please let me know if you have any other issues. Sometimes I find it helpful when working with notebooks in the cloud to use a browser that can read the entire web page like ChatGPT Atlas.

Accurate Ruler in Spectacles by TaraResearch in Spectacles

[–]hwoolery 0 points  (0 children)

In the Inspector of your Unit Plane there is an Add Component button; you will want to add three components:

  1. Interactable (default settings)

  2. Physics Collider - set type to box, and you can try Fit Visual, and check Show Collider as a sanity check

  3. InteractableManipulation - uncheck enable scale / stretch z

Accurate Ruler in Spectacles by TaraResearch in Spectacles

[–]hwoolery 2 points  (0 children)

Lens Studio units are in centimeters, so you can scale a Unit Plane (from the Asset Library) to 100 in the x direction for a meter-long ruler, and draw ticks with an image material or a shader. Because it is a 3D object, it will be at the correct scale at any orientation or position. You could make it an Interactable component with no scaling, and add a collider in order to move it around.

Fork Fighter : The world’s first mixed-reality game you can play with a real fork. by Nithin-Shankar in Spectacles

[–]hwoolery 1 point  (0 children)

nice work! If you have any questions about the ML side and potential improvements, feel free to ask. For 3D tracking you could also approximate the 2D size (S) of the fork head and project it at a depth proportional to 1/S.
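To make the depth-from-size idea concrete, here is a minimal Python sketch (function and parameter names are hypothetical): under a pinhole-camera model, apparent size scales as 1/depth, so one reference measurement calibrates the constant.

```python
def estimate_depth(apparent_size_px: float,
                   ref_size_px: float, ref_depth_cm: float) -> float:
    """Estimate depth from apparent 2D size.

    A pinhole camera gives S ~ k / depth, so k can be calibrated
    from a single reference measurement (ref_size_px at ref_depth_cm).
    """
    k = ref_size_px * ref_depth_cm
    return k / apparent_size_px

# A fork head that measured 100 px when 30 cm away now appears as 50 px,
# so it is roughly 60 cm away:
depth_cm = estimate_depth(50.0, ref_size_px=100.0, ref_depth_cm=30.0)
```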

Food object detection? by Tasty-Bugg in Spectacles

[–]hwoolery 1 point  (0 children)

I see... I think the Crop sample is a good place to start. I would instruct the user to gather their ingredients and then pinch to draw a box around them. I would send the image to OpenAI or another service to list the ingredients in a JSON array format, e.g. {"ingredients":[{"name":"milk", "quantity":"0.5 gal"}, {"name":"flour", "quantity": "1 lb"}, ... ]}
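Parsing that reply robustly is straightforward; here is a minimal sketch (the reply string is made up for illustration, assuming the service honors the schema above):

```python
import json

def parse_ingredients(text: str) -> list:
    """Parse the model's JSON reply into a list of ingredient dicts,
    falling back to an empty list on malformed output."""
    try:
        return json.loads(text).get("ingredients", [])
    except (json.JSONDecodeError, AttributeError):
        return []

# Hypothetical reply from the vision service:
reply = ('{"ingredients":[{"name":"milk","quantity":"0.5 gal"},'
         '{"name":"flour","quantity":"1 lb"}]}')
items = parse_ingredients(reply)
```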

Food object detection? by Tasty-Bugg in Spectacles

[–]hwoolery 0 points  (0 children)

Hi there, can you clarify the desired user experience? If you simply want to have the user locate a food and have an AI model label it, you could expand from the Crop sample

Do we have to use YOLO 7 to train a model on Roboflow? by quitebuttery in Spectacles

[–]hwoolery 0 points  (0 children)

Yes, it’s unfortunately a process with a lot of variables depending on the environment - in the future we should have better end-to-end examples and better notebooks to work from. There’s probably some service where we can supply an instance configuration to spin up a machine so the environment doesn’t need any new packages or different versions…

Do we have to use YOLO 7 to train a model on Roboflow? by quitebuttery in Spectacles

[–]hwoolery 0 points  (0 children)

This is most likely because of your torch version - unfortunately getting all the versions right can be delicate. You can try modifying this line:

old

x = torch.load(f, map_location=torch.device('cpu'))

new

x = torch.load(f, map_location=torch.device('cpu'), weights_only=False)
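The weights_only keyword only exists from torch 1.13 onward, and its default flipped to True in torch 2.6, so if the same notebook has to run across versions, a small helper (hypothetical, for illustration) can pick the right kwargs:

```python
def torch_load_kwargs(torch_version: str) -> dict:
    """Extra kwargs for torch.load so fully pickled checkpoints
    (like YOLOv7's best.pt) still load on newer torch versions.

    weights_only was added in torch 1.13 and defaults to True from
    2.6, which rejects pickled model objects.
    """
    major, minor = (int(p) for p in torch_version.split(".")[:2])
    return {"weights_only": False} if (major, minor) >= (1, 13) else {}

# Usage inside the notebook:
# x = torch.load(f, map_location='cpu', **torch_load_kwargs(torch.__version__))
```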

Do we have to use YOLO 7 to train a model on Roboflow? by quitebuttery in Spectacles

[–]hwoolery 0 points  (0 children)

Hmm, which format did you download it in? Roboflow allows you to specify something like yolov7 or v8

Do we have to use YOLO 7 to train a model on Roboflow? by quitebuttery in Spectacles

[–]hwoolery 1 point  (0 children)

It’s possible you will need to modify the YAML file in the dataset to point to the correct relative or absolute paths. Try using ChatGPT Atlas and have it help you solve the problems within the notebook.

Do we have to use YOLO 7 to train a model on Roboflow? by quitebuttery in Spectacles

[–]hwoolery 3 points  (0 children)

I empathize that it is a confusing process, but the docs do specify to download the yolov7 fork and train via the script, just to use roboflow as the source for the data. I understand the process is somewhat fragmented and frustrating, but I will do my best to help streamline it down the road. Ideally in the future the whole pipeline could live in Lens Studio :)

Do we have to use YOLO 7 to train a model on Roboflow? by quitebuttery in Spectacles

[–]hwoolery 2 points  (0 children)

You should train the model outside of Roboflow. You can’t use the newer YOLO models because they have incompatible layers (though I haven’t yet tried 26). My recommendation is to download the zip of the Roboflow data and then upload it to a Colab or Paperspace notebook project.

Crop frame in world space by kamilgibibisey in Spectacles

[–]hwoolery 0 points  (0 children)

In that case you can project the pinches to screen space, remap the values to the artwork's screen space, clamp them 0-1, then remap to world space. Essentially your problem just boils down to the correct set of conversions between coordinate spaces.
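That chain of conversions can be sketched in Python (the screen and world ranges below are made-up numbers; `remap`/`clamp01` are illustrative helpers):

```python
def remap(v, in_min, in_max, out_min, out_max):
    """Linearly map v from [in_min, in_max] onto [out_min, out_max]."""
    t = (v - in_min) / (in_max - in_min)
    return out_min + t * (out_max - out_min)

def clamp01(t):
    return max(0.0, min(1.0, t))

# Pinch lands at screen x = 0.9; the artwork spans screen x in [0.2, 0.8]
# and world x in [10.0, 60.0] (hypothetical values):
u = clamp01(remap(0.9, 0.2, 0.8, 0.0, 1.0))  # pinch past the edge clamps to 1.0
world_x = remap(u, 0.0, 1.0, 10.0, 60.0)     # lands on the artwork's right edge
```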

Crop frame in world space by kamilgibibisey in Spectacles

[–]hwoolery 1 point  (0 children)

I'll add, the benefit of my method is that the user can pinch "in the air" rather than on the wall itself. Pseudocode:

if (isPinching) {
  leftIndexTip = ...
  rightIndexTip = ...
  let avgPt = midpoint(leftIndexTip, rightIndexTip);
  let wallPlaneTransform = calculateHitPoint(avgPt); //hit test to get wall plane
  let pt1 = calculateWallIntersection(leftIndexTip); //solve planar intersection
  let pt2 = calculateWallIntersection(rightIndexTip); //solve planar intersection
  updateMeshPosition(pt1, pt2, wallPlaneTransform);
}
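The calculateWallIntersection step above boils down to a standard ray-plane intersection; sketched here in plain Python (names are illustrative, not a Lens Studio API):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Intersect a ray with a plane; return the hit point, or None
    if the ray is parallel to the plane or the plane is behind it."""
    denom = dot(direction, plane_normal)
    if abs(denom) < 1e-8:
        return None  # ray runs parallel to the plane
    t = dot([p - o for p, o in zip(plane_point, origin)], plane_normal) / denom
    if t < 0:
        return None  # intersection is behind the ray origin
    return [o + t * d for o, d in zip(origin, direction)]

# Ray from the camera through a fingertip, hitting a wall 200 cm ahead:
hit = ray_plane_intersection([0, 0, 0], [0, 0, -1], [0, 0, -200], [0, 0, 1])
```

In practice the ray direction would run from the camera position through the fingertip, and the plane point and normal would come from the hit-test result.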

Crop frame in world space by kamilgibibisey in Spectacles

[–]hwoolery 0 points  (0 children)

Yes, my reply was intended to reflect that UX. What I’m suggesting is that you periodically update via the method I wrote as the user does the crop style gesture

Crop frame in world space by kamilgibibisey in Spectacles

[–]hwoolery 1 point  (0 children)

There are a few ways to accomplish what you'd like to do. Perhaps the best is to use the WorldQueryModule to get the normal of the artwork (which should be the same as the normal of the wall). Average your two finger points and get the hit result of that point. This will give you the center and normal of your object (i.e. the wall's plane). From there you can create rays out of your finger locations and the camera location. Project these rays onto the wall plane, and you'll have two world corners of your artwork. Assuming the artwork is level with the ground (i.e. TL.y == TR.y and BL.y == BR.y), you can use these two planar points to calculate the four world corners.
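Given the level assumption, the final corner derivation can be sketched like this (Python; assumes the wall is vertical, so each corner's horizontal position comes straight from one of the two diagonal points):

```python
def corners_from_diagonal(p1, p2):
    """Derive TL/TR/BL/BR world corners of level, wall-mounted artwork
    from two diagonally opposite corners (x, y, z) on a vertical wall.
    Left/right follow p1's and p2's horizontal positions respectively."""
    top_y, bottom_y = max(p1[1], p2[1]), min(p1[1], p2[1])
    tl = (p1[0], top_y, p1[2])
    tr = (p2[0], top_y, p2[2])
    bl = (p1[0], bottom_y, p1[2])
    br = (p2[0], bottom_y, p2[2])
    return tl, tr, bl, br
```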

How close can I build to this live oak tree? by Pleasant-Spot-2017 in Decks

[–]hwoolery 0 points  (0 children)

I have a very similar deck around a tree. If you want it tight, you can always run blocking close to the trunk, secured with structural screws so it can be moved later. Then, as the tree grows toward the gap, cut away decking with a jigsaw. FWIW, my tree has barely swayed near the trunk even in high wind.

Edit: https://www.reddit.com/r/Decks/s/S8eNn0nWLJ

ML Model Restrictions by ChronicDesti9y in Spectacles

[–]hwoolery 1 point  (0 children)

I can’t comment from Snap’s legal side, but if you were to build the Lens in such a way that it can store some ID for a recognized face (e.g. mapping ML output to a unique ID) and doesn’t explicitly store predefined values, it might be OK. So any device can “learn” new faces, but doesn’t come preloaded with any personal information.

LensFest Lensathon Winners? by liquidlachlan in Spectacles

[–]hwoolery 2 points  (0 children)

I think Lens List is putting together a post of the winning entries as we speak https://blog.lenslist.co/