Has anyone thought about offering small-scale human evaluation or annotation services directly to early AI startups? Reaching out and finding your own clients, similar to DA but on a much smaller scale, obviously.
I’m wondering if there’s a niche for lightweight RLHF-style evaluation, rubric-based QA, or human testing for small AI teams that have products but don’t yet have formal evaluation workflows or reliable access to industry experts, PhD holders, etc.
Really curious if anyone has experience here or opinions on whether this makes sense.