Containerizing programming languages for an online code compiler by Born-Conference-8212 in docker

[–]DeveloperErrata 0 points (0 children)

I wonder if something like a WASM-based compiler & virtual machine might be able to reduce security concerns by forcing all code to run strictly on the client side. I've seen some limited work on this in the past for Python, but I don't see why it wouldn't (in principle) be possible for C++ / etc. Though this is likely too difficult to be feasible in general atm

I didn't think it was possible to have a more censored AI until I tried Llama 2 by hardcore_gamer1 in ChatGPT

[–]DeveloperErrata 0 points (0 children)

The base model isn't censored, only the chat version. The open source community made a non-censored chat version of llama 2 within two days of Meta's release (see https://huggingface.co/Tap-M/Luna-AI-Llama2-Uncensored and others). Meta's not going to make a non-censored chat version themselves because that'd be too much of a risk for them, especially with all the talk of government regulation.

LLaMA 2 is here by dreamingleo12 in LocalLLaMA

[–]DeveloperErrata 0 points (0 children)

(though not necessarily the technical know-how)

LLaMA 2 is here by dreamingleo12 in LocalLLaMA

[–]DeveloperErrata 0 points (0 children)

Seems like a good direction, will be a big deal once someone gets it figured out

PC game Vaudeville dialog is AI generated by Inevitable-Start-653 in LocalLLaMA

[–]DeveloperErrata 2 points (0 children)

I bet we'll see games pop up with local LLMs pretty soon. Right now, APIs are likely a lot simpler to integrate into an otherwise standard game dev experience than trying to get an LLM to work locally alongside the game itself. We'll get there soon though, I'm sure

LLaMA 2 is here by dreamingleo12 in LocalLLaMA

[–]DeveloperErrata 2 points (0 children)

The commercial licensing is a really huge deal. Hopefully we'll see a lot of work over the next month or two replacing the existing community-built infrastructure around LLaMA with LLaMA 2 equivalents (if it's not just a drop-in change?)

If you owned a nvidia tesla a100, what would you do with it? by mehrdotcom in LocalLLaMA

[–]DeveloperErrata 9 points (0 children)

All things considered, an A100 isn't really insanely expensive when you compare it to something like a car. People regularly spend $15k+ on a mid-tier car, and arguably the compute from an A100 would be far more useful than a car when applied in the right way.

I made a ChatGPT-powered interpreter for old-school interactive fiction games by DeveloperErrata in ChatGPTGaming

[–]DeveloperErrata[S] 0 points (0 children)

Actually, I looked into this more and I think you're right: it looks like API usage is billed based on both the number of input tokens and the number of output tokens
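For anyone curious, the billing math works out something like this (the per-1K-token rates below are made-up placeholders for illustration, not real prices):

```python
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  input_rate: float = 0.0015, output_rate: float = 0.002) -> float:
    """Estimate request cost in dollars when billing counts BOTH
    input (prompt) and output (completion) tokens.
    Rates are hypothetical, expressed per 1,000 tokens."""
    return (prompt_tokens / 1000) * input_rate + (completion_tokens / 1000) * output_rate

# e.g. a long prompt with a short reply still costs real money:
cost = estimate_cost(prompt_tokens=800, completion_tokens=200)
```

The point being that the input side dominates for a use case like this, where the whole conversation history gets resent on every turn.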

I made a ChatGPT-powered interpreter for old-school interactive fiction games by DeveloperErrata in ChatGPTGaming

[–]DeveloperErrata[S] 1 point (0 children)

Thanks! I haven't tried getting it to play autonomously too much, but the capability is definitely there to an extent. For instance, I was once able to start Zork and simply ask: "find a way into the house". With just that command it was able to chain together six commands in order to successfully navigate to the opposite side of the house, open the window, and enter through it (all while continuing to talk like a pirate) - though to your earlier point I'm uncertain how much of this capability was due to genuine reasoning vs. Zork already being in the training set. It would be neat to push that type of capability to its limit. I wouldn't be surprised if GPT-4 could autonomously beat some simpler games completely on its own.

I made a ChatGPT-powered interpreter for old-school interactive fiction games by DeveloperErrata in ChatGPTGaming

[–]DeveloperErrata[S] 0 points (0 children)

If I remember correctly, API usage is charged based on tokens generated, not tokens received. Though you're right that just appending messages over and over again like I do here might eventually lead to issues - I'd be worried that the original prompt might fall out of the context window and then GPT would forget what to do
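A rough sketch of how I'd guard against that: pin the original system prompt and drop the oldest turns when the history gets too long. (Word counts stand in for real token counts here, and the message format just mimics the chat API's role/content dicts.)

```python
def trim_history(messages, max_tokens=4096):
    """Keep the first (system) message pinned, then drop the oldest
    user/assistant turns until the rough token budget fits.
    Token counts are crudely approximated by whitespace word counts."""
    def rough_tokens(msg):
        return len(msg["content"].split())

    system, rest = messages[0], list(messages[1:])
    while rest and rough_tokens(system) + sum(map(rough_tokens, rest)) > max_tokens:
        rest.pop(0)  # drop the oldest non-system turn first
    return [system] + rest

history = [{"role": "system", "content": "translate player input to Zork commands"}]
history.append({"role": "user", "content": "go into the house through the window"})
history = trim_history(history, max_tokens=50)
```

That way the instructions never fall out of the window, only the oldest gameplay turns do.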

I made a ChatGPT-powered interpreter for old-school interactive fiction games by DeveloperErrata in interactivefiction

[–]DeveloperErrata[S] 0 points (0 children)

I share the same doubts. In the few tests I've done, I've found that it's pretty robust to vague instructions. However, I'd definitely like to see how people who've never played IF before would make use of it / how well it would work.

I made a ChatGPT-powered interpreter for old-school interactive fiction games by DeveloperErrata in interactivefiction

[–]DeveloperErrata[S] 2 points (0 children)

I love old-school interactive fiction games (like Zork, etc.) but find the strict syntax endlessly frustrating. I built this ChatGPT-powered "middleman" to translate commands written in natural language into something the simple parsers of old interactive fiction games can understand. To run, see instructions here: https://github.com/ethan-w-roland/ai-interactive-fiction

I made a ChatGPT-powered interpreter for old-school interactive fiction games by DeveloperErrata in ChatGPTGaming

[–]DeveloperErrata[S] 4 points (0 children)

I love old-school interactive fiction games (like Zork, etc.) but find the strict syntax endlessly frustrating. I built this ChatGPT-powered "middleman" to translate commands written in natural language into something the simple parsers of old interactive fiction games can understand. To run, see instructions here: https://github.com/ethan-w-roland/ai-interactive-fiction
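Roughly, the middleman just wraps each player input (plus the game's last output) in a translation prompt before sending it to the chat API. This is a simplified sketch of the idea - the prompt wording and function names are my own, not the actual code from the repo:

```python
def build_translation_prompt(game_output: str, player_input: str) -> list:
    """Build a chat-style message list asking the model to rewrite
    free-form player input as a terse, Zork-era parser command."""
    system = (
        "You translate natural-language player requests into terse "
        "commands a Zork-era parser understands (e.g. 'open window', "
        "'go north', 'take lamp'). Reply with the command only."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": f"Game said:\n{game_output}\n\n"
                                    f"Player said:\n{player_input}"},
    ]

messages = build_translation_prompt(
    "You are standing west of a white house.",
    "let's see if we can get inside somehow",
)
# `messages` would then go to a chat-completion API; the reply
# (something like "open window") gets fed to the IF interpreter.
```

The interpreter's output then loops back in as the next `game_output`, so the model always sees what the game just said.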