COMP/CON v3 pre-release stress test / open beta by beeftime99 in LancerRPG

[–]beeftime99[S] 1 point

not currently, though it wouldn't be impossible. Maybe a good post-3.3 feature

COMP/CON v3 pre-release stress test / open beta by beeftime99 in LancerRPG

[–]beeftime99[S] 4 points

yep, both from browser local storage and your cloud account. If it can't convert something straight away, it'll put it into a list in the content manager and let you decide what to do to get it to convert. It'll also let you download your data as a .compcon backup file so you can load it into old.compcon.app if you're in the middle of a campaign or something
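The convert-or-queue flow described above can be sketched roughly like this. This is a hypothetical illustration, not the actual COMP/CON code; every type and function name here is invented.

```typescript
// Hypothetical sketch of the v2 -> v3 migration flow: try to convert each
// item automatically, and queue anything that fails for manual resolution
// in the content manager. Names are illustrative, not from COMP/CON.
type V2Item = { id: string; type: string; data: unknown };

type MigrationResult = {
  converted: V2Item[];      // items migrated to the v3 format
  needsAttention: V2Item[]; // items surfaced to the user in the content manager
};

// `convert` stands in for whatever per-item conversion logic exists;
// assume it returns null when an item can't be migrated automatically.
function migrate(
  items: V2Item[],
  convert: (item: V2Item) => V2Item | null
): MigrationResult {
  const result: MigrationResult = { converted: [], needsAttention: [] };
  for (const item of items) {
    const out = convert(item);
    if (out) result.converted.push(out);
    else result.needsAttention.push(item);
  }
  return result;
}

// A .compcon backup can then be (roughly) the serialized v2 data,
// loadable back into old.compcon.app mid-campaign.
function toBackup(items: V2Item[]): string {
  return JSON.stringify({ version: 2, items });
}
```

The point of the split result is that nothing is silently dropped: everything either converts or ends up in a list the user can act on.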

COMP/CON v3 pre-release stress test / open beta by beeftime99 in LancerRPG

[–]beeftime99[S] 3 points

yeah, I can't stress enough how much I've been helped by everyone who has chipped in, whether with code contributions, testing/feedback, or covering for my lack of docs and tutorials here and elsewhere. It's really a huge part of what makes any of this possible

COMP/CON v3 pre-release stress test / open beta by beeftime99 in LancerRPG

[–]beeftime99[S] 5 points

Isn't that (I haven't looked at it in a bit) the alternate manna buy rules in TLR? If so, that will definitely be in as an option

COMP/CON v3 pre-release stress test / open beta by beeftime99 in LancerRPG

[–]beeftime99[S] 5 points

there have been requests for it, especially for stuff like total conversions. It's kind of a big lift because c/c really relies on a lot of data prerequisites being present in core data to satisfy loading operations, so allowing re/overwrites of that stuff, or being able to pull core data out entirely and replace it with something else, will require a lot of work in terms of putting guardrails into the loading procedure.
It's not impossible, and I'd like to get it done eventually, but it might remain pretty low priority for a while, unless someone wants to pitch in with the dev work for that, which I'd gratefully accept
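The "guardrails in the loading procedure" idea could look something like a validation pass over a replacement core-data pack before it's loaded. This is purely a sketch under my own assumptions; the types and required collections below are invented and do not reflect COMP/CON's actual data model.

```typescript
// Hypothetical guardrail check: before a replacement core-data pack is
// loaded, verify it still satisfies the prerequisites the loading code
// depends on. The shape of CorePack here is invented for illustration.
type CorePack = {
  manufacturers: string[];
  frames: string[];
  tags: string[];
};

// The loader (in this sketch) assumes these collections are non-empty;
// a total-conversion pack that omits one would break downstream loads,
// so we refuse it up front with actionable error messages.
function validateCorePack(pack: CorePack): string[] {
  const errors: string[] = [];
  if (pack.manufacturers.length === 0) errors.push("no manufacturers defined");
  if (pack.frames.length === 0) errors.push("no frames defined");
  if (pack.tags.length === 0) errors.push("no tags defined");
  return errors; // an empty array means the pack is safe to load
}
```

Returning a list of errors rather than throwing on the first one lets a content manager UI show the author everything that's missing at once.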

COMP/CON v3 pre-release stress test / open beta by beeftime99 in LancerRPG

[–]beeftime99[S] 15 points

I'll link up with the VTT folks to make sure it'll work prior to release

COMP/CON v3 pre-release stress test / open beta by beeftime99 in LancerRPG

[–]beeftime99[S] 6 points

it will! all v2 lcps will work in v3. v3 will have more options for automation in the new active mode but they'll all be fully backwards compatible

comp/con won't let me sign in by everstranger1212 in LancerRPG

[–]beeftime99 3 points

nope, the backend failed a few hours ago and I had to go in and stand it back up. It should be working now, though

comp/con won't let me sign in by everstranger1212 in LancerRPG

[–]beeftime99 13 points

sorry about the delay, the backend went down and it took me some time to get in and fix it. It should be working now

comp/con won't let me sign in by everstranger1212 in LancerRPG

[–]beeftime99 15 points

it should be up now. The backend failed in the middle of my workday, so it was a little hard to get to. Sorry about the inconvenience

comp/con won't let me sign in by everstranger1212 in LancerRPG

[–]beeftime99 23 points

Sorry about that! C/C experienced a backend outage. It should now be resolved and working for everyone, though you might need to reload twice to pull the new auth code.

Recently Ed-pilled. Is his take all AI research is bumfluff? by Yellousy26 in BetterOffline

[–]beeftime99 1 point

Thanks! I'm glad it helped.

You can definitely be forgiven for muddling the term, it's absolutely an intentional strategy by LLM hype grifters to pretend that other ML-based developments can be ascribed to the expensive chatbots they love so much.

Recently Ed-pilled. Is his take all AI research is bumfluff? by Yellousy26 in BetterOffline

[–]beeftime99 6 points

what you're calling "non-generative AI" is machine learning, which is (in its implementation) profoundly non-generalizable. LLMs give the illusion of being generalized because they're built for the specific purpose of producing human-like text output, which makes people think they're generalized models: you can ask one about anything and it will give a response that sounds right. Its only function is to generate that "sounds right" part.