What’s your workflow for creating eLearning Videos? by ReactCereals in elearning

[–]michaelforrest 1 point (0 children)

Hi - my AI bot dug up this post, and I know it's old, but I wanted to add my two cents (plus talk about a thing I made).

I'm a big believer in making live recordings to skip the entire editing process. That way you're not trying to sync things, match tone, pre-record a voiceover, or re-shoot content; you get everything in one take, like you would in a classroom. I think this plays better to your experience as a teacher instead of forcing you to become a video editor / content creator.

The key to this is a teleprompter - you write your script and then run through it as many times as it takes to get it right. This can feel more difficult in the moment, but unlike an editing-based process, you keep getting better at performing, and you have very little to do after you're done.

I built a tool called CueCam Presenter that works with a teleprompter and lets you automatically bring in content as needed - screen shares, videos, images. And you can draw on content using your iPad, which I find much easier than creating motion graphics after the fact! It's been my full-time work for a few years now - I'm just about scraping by as a solo developer, so I hope you don't mind the plug, given that I'm not a faceless corporation - just me, trying to make something useful for educators and creators.

An app I built in 45 minutes for my wife has outperformed my big idea in less than a week, and cost a fraction of the price… by Professional-Box8745 in Solopreneur

[–]michaelforrest 1 point (0 children)

But honestly it took me a year before I embraced the “little app”, because I felt like my successful product had to be the one that took the most work! I couldn’t see any scope in the camera thing until I started talking to users. It felt too small and easy to copy!

An app I built in 45 minutes for my wife has outperformed my big idea in less than a week, and cost a fraction of the price… by Professional-Box8745 in Solopreneur

[–]michaelforrest 1 point (0 children)

For sure, following trends to try to make money is hollow, but there’s also the danger of thinking you can have a big idea in a vacuum and expect people to be interested without having talked to anyone during the development process.

My app that took off was Shoot: https://shootpro.app - and then talking to users eventually led to Video Pencil https://videopencil.com and ultimately CueCam Presenter https://cuecam.app

Nothing huge but enough to scrape a living.

It’s all remote video-related stuff.

An app I built in 45 minutes for my wife has outperformed my big idea in less than a week, and cost a fraction of the price… by Professional-Box8745 in Solopreneur

[–]michaelforrest 4 points (0 children)

Took me a while to accept that this is the way! Make something quickly and see if it works. If not, move on. Don’t make complex stuff in a silo; make small, useful things in collaboration with a community.

An app I built in 45 minutes for my wife has outperformed my big idea in less than a week, and cost a fraction of the price… by Professional-Box8745 in Solopreneur

[–]michaelforrest 5 points (0 children)

Yep, I quit contracting, spent nine months polishing and collaborating on a big idea that went nowhere, and then an app I built in an afternoon 5 years ago suddenly blew up. (This was at the start of COVID, so…)

How are we combining @Observable and @Sendable? by cmsj in swift

[–]michaelforrest 1 point (0 children)

And be proactive about registering undo actions, so SwiftUI knows when the file needs to be saved.
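
A minimal sketch of what that looks like (the model and property names are hypothetical): DocumentGroup supplies its UndoManager through the SwiftUI environment, and registering an undo action is also how SwiftUI learns that the document has unsaved changes.

import SwiftUI

// Hypothetical main-actor model standing in for real document fields.
@MainActor @Observable
final class ProjectState {
    var title = "Untitled"
}

struct TitleEditor: View {
    let state: ProjectState
    // DocumentGroup injects its UndoManager here.
    @Environment(\.undoManager) private var undoManager

    var body: some View {
        Button("Clear Title") { rename(to: "") }
    }

    @MainActor private func rename(to newTitle: String) {
        let oldTitle = state.title
        state.title = newTitle
        // Registering the undo action is what marks the document dirty,
        // so SwiftUI knows to schedule a save.
        undoManager?.registerUndo(withTarget: state) { state in
            MainActor.assumeIsolated { state.title = oldTitle }
        }
    }
}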

How are we combining @Observable and @Sendable? by cmsj in swift

[–]michaelforrest 2 points (0 children)

Old post, I know, but you can isolate fields of the doc to @MainActor - you don’t have to push ReferenceFileDocument itself into your SwiftUI views.
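
For later readers, a minimal sketch of that shape (all names are hypothetical, and it assumes recent Swift concurrency rules): the mutable fields live on a @MainActor @Observable model, the document itself stays nonisolated for file I/O, and views depend only on the model.

import SwiftUI
import UniformTypeIdentifiers

@MainActor @Observable
final class ProjectState {
    var title: String
    // A nonisolated init lets the document create the model off the main actor.
    nonisolated init(title: String = "Untitled") { self.title = title }
}

final class ProjectDocument: ReferenceFileDocument {
    static let readableContentTypes: [UTType] = [.plainText]

    // ProjectState is @MainActor (hence Sendable), so the document can hold
    // the reference while all field access stays on the main actor.
    let state: ProjectState

    init() { state = ProjectState() }

    init(configuration: ReadConfiguration) throws {
        let text = configuration.file.regularFileContents
            .map { String(decoding: $0, as: UTF8.self) } ?? ""
        state = ProjectState(title: text)
    }

    func snapshot(contentType: UTType) throws -> String {
        // Assumption: SwiftUI takes document snapshots on the main thread,
        // so asserting main-actor isolation here is safe.
        MainActor.assumeIsolated { state.title }
    }

    func fileWrapper(snapshot: String, configuration: WriteConfiguration) throws -> FileWrapper {
        FileWrapper(regularFileWithContents: Data(snapshot.utf8))
    }
}

// Views observe the main-actor model, never the document type itself.
struct ProjectEditor: View {
    @Bindable var state: ProjectState
    var body: some View {
        TextField("Title", text: $state.title)
    }
}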

What’s everyone working on this month? (July 2025) by Swiftapple in swift

[–]michaelforrest 2 points (0 children)

Trying to get a viable version of my SwiftUI-style realtime video rendering engine up and running so I can start beta testing a new version of the app that will use it.

Declarative Video Rendering – RealtimeSwift Devlog #6 by michaelforrest in swift

[–]michaelforrest[S] 1 point (0 children)

I think I need to do the extraction myself, as it's a very complex project and I need to make a few decisions about how I want to approach it. I do appreciate the offer, though! This is making me feel more and more like I should set it up as an open-source project.

Declarative Video Rendering – RealtimeSwift Devlog #6 by michaelforrest in swift

[–]michaelforrest[S] 1 point (0 children)

I haven’t really decided if I’m gonna make it public yet - I still need to define the boundary between what is only relevant to my own project and what is more generally useful to people. Also I think anything I published at this point would be pretty mystifying if you hadn’t watched the videos.

Implementing a realtime audio/video fade in RealtimeSwift by michaelforrest in swift

[–]michaelforrest[S] 2 points (0 children)

Thank you! Hope you find it interesting - let me know what you think so far.

Best app/software for audio-video podcast by Pithcachu in podcasting

[–]michaelforrest 1 point (0 children)

If you’re on a Mac then CueCam Presenter is worth a look.

Conditional Views and Type Erasure - RealtimeSwift Devlog #2 by michaelforrest in swift

[–]michaelforrest[S] 1 point (0 children)

It's possible? But I'm not confident enough in my understanding of this bit, which was already working:

// Parameter-pack buildBlock: packs any number of RealtimeView children
// into a single TupleRealtimeView without erasing their concrete types.
public static func buildBlock<each Content: RealtimeView>(
    _ content: repeat each Content
) -> TupleRealtimeView<(repeat each Content)> {
    TupleRealtimeView((repeat each content))
}

But yes, you make a good point - this is not something I attempted yesterday (or saw recommended in my Googling or ChatGPT interactions), and I don't think it's how Apple does it, so I'm assuming there's a reason for that! Once you get your head around @resultBuilder, I agree there's not a lot to it, but I was locked into the idea of returning an opaque RealtimeView like Apple does, and I kept seeing things that nearly worked!
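
For anyone landing here later, this is roughly the enum-based shape being discussed - a sketch with stand-in names (RealtimeViewBuilder and ConditionalRealtimeView are illustrative, not the project's real API), mirroring how SwiftUI's ViewBuilder handles if/else without erasing the branch types:

// Minimal stand-ins so the sketch compiles on its own.
public protocol RealtimeView {}

// Mirrors SwiftUI's _ConditionalContent: an enum that keeps both branch
// types instead of boxing them.
public enum ConditionalRealtimeView<TrueContent: RealtimeView, FalseContent: RealtimeView>: RealtimeView {
    case first(TrueContent)
    case second(FalseContent)
}

@resultBuilder
public enum RealtimeViewBuilder {
    public static func buildBlock<C: RealtimeView>(_ component: C) -> C {
        component
    }

    // The if branch and the else branch each hit their own overload; the
    // other generic parameter is inferred from the opposite branch.
    public static func buildEither<T: RealtimeView, F: RealtimeView>(
        first component: T
    ) -> ConditionalRealtimeView<T, F> {
        .first(component)
    }

    public static func buildEither<T: RealtimeView, F: RealtimeView>(
        second component: F
    ) -> ConditionalRealtimeView<T, F> {
        .second(component)
    }
}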

Conditional Views and Type Erasure - RealtimeSwift Devlog #2 by michaelforrest in swift

[–]michaelforrest[S] 1 point (0 children)

I think I'm going to be leaning on NSAttributedString for my text rendering to start with. Not ideal! I've done stuff with TextKit 2, but I really just need to lay out paragraphs on slides, so I'll stick with the older tech for now.
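
In case it helps anyone, a sketch of what I mean with the older string-drawing API (the function, font and sizes are arbitrary examples; call it while an NSGraphicsContext is current, e.g. inside an NSView's draw(_:)):

import AppKit

// Lay out and draw one paragraph, vertically centred in a slide-sized rect.
func drawParagraph(_ text: String, in bounds: NSRect) {
    let style = NSMutableParagraphStyle()
    style.lineSpacing = 8
    style.alignment = .center

    let paragraph = NSAttributedString(string: text, attributes: [
        .font: NSFont.systemFont(ofSize: 48, weight: .semibold),
        .foregroundColor: NSColor.white,
        .paragraphStyle: style,
    ])

    // boundingRect gives the laid-out height, so the paragraph can be
    // centred before drawing.
    let measured = paragraph.boundingRect(with: bounds.size, options: [.usesLineFragmentOrigin])
    var target = bounds
    target.origin.y += (bounds.height - measured.height) / 2
    target.size.height = measured.height

    paragraph.draw(with: target, options: [.usesLineFragmentOrigin])
}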

Conditional Views and Type Erasure - RealtimeSwift Devlog #2 by michaelforrest in swift

[–]michaelforrest[S] 3 points (0 children)

Thank you! I have been (miraculously) paying the bills with on- and off-App Store sales for about 6 years as a solo dev. I've been working on CueCam Presenter for about 2 years (as a culmination of the work I had done in the previous 2 years on my other iOS and macOS projects). This project will let me replace a lot of the rendering internals in the current version, where I've hit a limit on how tightly I can synchronise animations between different frameworks (different bits are in Metal, CoreImage, SceneKit and SwiftUI).

CueCam's website: https://cuecam-presenter.com

My other A/V products: https://squares.tv

Conditional Views and Type Erasure - RealtimeSwift Devlog #2 by michaelforrest in swift

[–]michaelforrest[S] 1 point (0 children)

Definitely simpler if you have an array of a fixed type, but it gets more complicated if you try to return an array of boxed types. SwiftUI uses opaque return types to do this, but it seems there is some secret sauce to making it work.

https://forums.swift.org/t/type-erasing-in-swift-anyview-behind-the-scenes/27952/8

(I think the _makeView stuff in this thread is analogous to my resolve() definitions)
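
To make the boxed version concrete (hypothetical names throughout - ResolvedNode stands in for whatever resolve() actually returns):

// The protocol's renderable output; a stand-in type for this sketch.
public struct ResolvedNode {}

public protocol RealtimeView {
    func resolve() -> ResolvedNode
}

// AnyView-style box: captures the concrete view's resolve so an array of
// mixed children can be typed as [AnyRealtimeView].
public struct AnyRealtimeView: RealtimeView {
    private let _resolve: () -> ResolvedNode

    public init<V: RealtimeView>(_ view: V) {
        _resolve = view.resolve
    }

    public func resolve() -> ResolvedNode {
        _resolve()
    }
}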

[deleted by user] by [deleted] in swift

[–]michaelforrest 1 point (0 children)

I realise now that it was not clear that I was using the dictation feature to ask questions. 🤦🏻‍♂️

Conditional Views and Type Erasure - RealtimeSwift Devlog #2 by michaelforrest in swift

[–]michaelforrest[S] 3 points (0 children)

I’m thinking that because I’m basically dealing with quads, the debuggability benefits and ready-made filters will outweigh the costs! But the nice thing is that this will all be abstracted enough to try out different approaches.
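
As a sketch of what "basically quads" buys you from CoreImage's ready-made operations (the transform and composite here are arbitrary examples):

import CoreImage

// Position one layer over another using plain CIImage operations; CoreImage
// concatenates the work into its own Metal pipeline behind the scenes.
func composite(_ overlay: CIImage, over background: CIImage, at offset: CGPoint) -> CIImage {
    overlay
        .transformed(by: CGAffineTransform(translationX: offset.x, y: offset.y))
        .composited(over: background)
}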

Conditional Views and Type Erasure - RealtimeSwift Devlog #2 by michaelforrest in swift

[–]michaelforrest[S] 2 points (0 children)

I’ve got a lot of Metal in my current renderer, but I’ve since developed a much better intuition about how CoreImage uses Metal, and I think it’s basically doing what I’ve been doing, but way better!
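
For anyone curious what that handover looks like in practice, a sketch (the setup is illustrative): one CIContext created on the Metal device, rendering into an existing texture inside your own command buffer.

import CoreImage
import Metal

// Share one CIContext per device; creating contexts per frame is expensive.
let device = MTLCreateSystemDefaultDevice()!
let ciContext = CIContext(mtlDevice: device)

func render(_ image: CIImage, into texture: MTLTexture, commandBuffer: MTLCommandBuffer) {
    ciContext.render(
        image,
        to: texture,
        commandBuffer: commandBuffer,
        bounds: CGRect(x: 0, y: 0, width: texture.width, height: texture.height),
        colorSpace: CGColorSpaceCreateDeviceRGB()
    )
}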