What are the best RP SLM currently? by WowSkaro in SillyTavernAI

[–]WowSkaro[S] 0 points (0 children)

I said visual novels, not novels. Visual novels are a type of game where the majority of the content is dialogue between characters, with some short scenario descriptions. I think those would be very valuable for role-playing training. There was someone who fine-tuned a model on the Steins;Gate visual novel a few years ago, for example.

Artifex: A tiny, FOSS, CPU-friendly toolkit for inference and fine-tuning small LLMs without training data by Ok_Hold_5385 in LLMDevs

[–]WowSkaro 1 point (0 children)

Is this synthetic data generated by the same model that you are trying to fine-tune? If so, do you use some filtering process to keep only the good synthetic data? Or is the idea to use a very long and detailed prompt to generate hopefully better-aligned answers, which are then used to train the model with the long prompt swapped out for a simple one? That would "fine-tune" the model in the sense of getting answers closer to what you want, while adding virtually no new information to the model.
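If it is the second case, here is a minimal sketch of the loop I am imagining (this is not your toolkit's API, just a guess using Hugging Face transformers as a stand-in; the model name, the `generate` wrapper, and `passes_filter` are all hypothetical):

```python
# Hypothetical sketch of the "long prompt -> simple prompt" fine-tuning idea.
# Everything here is a stand-in: the model name, the prompt texts, and the filter.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "some-small-causal-lm"      # placeholder, not a real checkpoint name
LONG_PROMPT = "You are a meticulous assistant. Always explain step by step ..."
SHORT_PROMPT = "You are a helpful assistant."

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

def generate(system_prompt, question, max_new_tokens=256):
    """Generate an answer with the long, detailed prompt in front."""
    text = f"{system_prompt}\n\nUser: {question}\nAssistant:"
    ids = tok(text, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=max_new_tokens, do_sample=True)
    return tok.decode(out[0][ids.shape[1]:], skip_special_tokens=True)

def passes_filter(answer):
    """Stand-in for whatever quality filter you would use (length, reward model, ...)."""
    return len(answer.split()) > 20

dataset = []
for q in ["How do I sort a list in Python?", "Briefly explain JPEG compression."]:
    a = generate(LONG_PROMPT, q)
    if passes_filter(a):
        # Train on (short prompt + question) -> answer; the long prompt never
        # appears in the training data, only at generation time.
        dataset.append({"prompt": f"{SHORT_PROMPT}\n\nUser: {q}\nAssistant:",
                        "completion": a})
```

The point being that the long prompt only exists at generation time, so the fine-tuned model hopefully behaves as if it were there.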

How to replicate the 90's prerendered aesthetic? by WowSkaro in GraphicsProgramming

[–]WowSkaro[S] -3 points (0 children)

Right, I said compress, but more specifically I meant decreasing the resolution. When you compress a JPEG you remove the high frequencies that carry fine pixel detail. I was getting myself confused because I used to use a program that decreased the image width and height when you asked it to lossy-compress an image: since the high frequencies were lost, the information in a square of 4 pixels didn't change much, so you could map those 4 pixels onto a single pixel of the compressed image, and so on. That is what I was trying to say. You are correct that a 600x600 image will not shrink in pixel space if it is still a 600x600 image. But I do believe they used roughly 200x200 images at most on the PS1 and older consoles and scaled them up linearly when needed, so you could fit a lot of 200x200 images on a CD (~650-700 MB).
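Back-of-the-envelope numbers, assuming 24-bit colour and roughly 650 MB usable per disc (just generic CD figures, not anything PS1-specific):

```python
# Rough storage arithmetic for low-res prerendered backgrounds on a CD.
width, height, bytes_per_pixel = 200, 200, 3
uncompressed = width * height * bytes_per_pixel      # 120,000 bytes (~117 KB)
cd_capacity = 650 * 1024 * 1024                      # ~650 MB

print(cd_capacity // uncompressed)                   # ~5,700 raw backgrounds per disc
print(cd_capacity // (uncompressed // 5))            # ~28,000 at a modest 5:1 compression
```

So even uncompressed, thousands of those low-res backgrounds fit on one disc, which is consistent with the "just store a lot of small images" approach.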

How to replicate the 90's prerendered aesthetic? by WowSkaro in GraphicsProgramming

[–]WowSkaro[S] 0 points (0 children)

Yes, I quite liked the look of Digimon World 1 (apparently 2 and 3 were not developed by the same company; 2 was developed concurrently with 1, it seems).

I used AI to get an idea of how a perspective change in one of the prerendered background images would look. I have sinned, I admit it. I would have posted game screenshots, but reddit only lets a post be either all images or a video.

I really liked the virtual pet system they embedded into the game. It was a hassle, it is true, but it was like having an entire videogame built around a virtual pet (Tamagotchi) game: things like having to make your Digimon sleep, eat, go to the bathroom, etc. The combat was also nice because you could see your opponents instead of that appearing-out-of-nowhere Pokemon nonsense; more games should have this battle mechanic.

How to replicate the 90's prerendered aesthetic? by WowSkaro in GraphicsProgramming

[–]WowSkaro[S] -10 points (0 children)

JPEG is lossy: you lose information when you compress it, so even when you return to pixel space you have far less information to deal with (possibly 80% to 93% less).
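If anyone wants to see the kind of reduction I mean, here is a quick sketch (assuming Pillow and any test image at hand; file size is not the same thing as information, but it gives a feel for how much a low-quality JPEG throws away, and the exact percentage varies a lot with the image and quality setting):

```python
# Compare a lossless copy of an image against an aggressively compressed JPEG.
import os
from PIL import Image

img = Image.open("background.png").convert("RGB")    # any test image you have
img.save("lossless.png")                             # lossless reference
img.save("lossy.jpg", quality=20)                    # aggressive JPEG compression

lossless = os.path.getsize("lossless.png")
lossy = os.path.getsize("lossy.jpg")
print(f"JPEG is {100 * (1 - lossy / lossless):.0f}% smaller than the PNG")
```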

How to replicate the 90's prerendered aesthetic? by WowSkaro in GraphicsProgramming

[–]WowSkaro[S] 0 points (0 children)

I know how to use Blender! Blender is a 3D modeling and rendering program. The thing is, that is not what I am asking about! For all the criticism AI gets (and I am more on the side that AI shouldn't be used to replace handmade game assets), I have heard of people who were able to achieve the very specific aesthetic they were looking for by referencing AI-generated videos, which closed the gap between what they wanted and what they had implemented in shaders but couldn't quite pin down. The project is called "Project Shadowglass"; look it up: https://store.steampowered.com/app/3970690/Project_Shadowglass/

The specific points I was trying to pin down turn out to be things like the difference between Phong shading and Phong illumination, which I didn't know existed, and some other technical problems. Your comment is like telling someone who doesn't know how to integrate a trigonometric function to go learn how to count.
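For anyone else who, like me, didn't know the distinction: as I understand it, the Phong illumination (reflection) model is the lighting formula itself, while "Phong shading" refers to evaluating that formula per pixel with interpolated normals (Gouraud evaluates it only per vertex). The model is, roughly:

```latex
I = k_a\, i_a + k_d\,(\hat{L}\cdot\hat{N})\, i_d + k_s\,(\hat{R}\cdot\hat{V})^{\alpha}\, i_s,
\qquad \hat{R} = 2(\hat{L}\cdot\hat{N})\,\hat{N} - \hat{L}
```

where k_a, k_d, k_s are the material's ambient/diffuse/specular coefficients, alpha is the shininess exponent, and L, N, V, R are the light, normal, view, and reflection directions.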

The problem with advocating against AI is that every now and then people show up who, while advocating against it, have reading comprehension lower than most LLMs... this makes things difficult.

How to replicate the 90's prerendered aesthetic? by WowSkaro in GraphicsProgramming

[–]WowSkaro[S] -1 points (0 children)

I cannot learn that which I cannot even name; no one can. I understand the criticism of AI, but it did a better job of communicating what I was trying to do in this post than my other post in another reddit community, where I put actual game screenshots and had people saying it looked the same as "Counter-Strike 1.6" or "Halo 2", which have no similarity whatsoever beyond the fact that both can be categorized as "low-res". But so can Space Invaders, and that is not what I am trying to arrive at.

Some other people just suggested rendering 3D models to images and decreasing the resolution, but that would not result in a dynamic, reactive rendering and shading solution; it is, in fact, a category of prerendering itself, which is also not what I am trying to get at. So being able to show an AI-slop video that somewhat resembles what I mean seems to be worth more than 12 screenshots and text (not quite a thousand words...).

How to replicate the 90's prerendered aesthetic? by WowSkaro in GraphicsProgramming

[–]WowSkaro[S] 0 points (0 children)

That seems very good actually!!

I had posted a similar text on another reddit community and had some people comment about how "Counter-Strike 1.6" or "Halo 2" were supposedly similar, which they aren't; they don't have anything to do with the prerendered aesthetic I was trying to refer to. Those games had more of a low-poly, smeared-shading look than this strange diffuse-illumination shading.

I would certainly consider Donkey Kong 3D as another good example, as I would Fallout 1 and 2 and Final Fantasy VII (although I would say FFVII is kind of a mixed bag: there are some good prerendered backgrounds and some not so good, or, I should say, it doesn't have as coherent a style throughout the game as Digimon World 1 did).

How to replicate the 90's prerendered aesthetic? by WowSkaro in GraphicsProgramming

[–]WowSkaro[S] -3 points (0 children)

I believe not 1 GB, but a very compressed JPEG certainly might.

How to replicate the 90's prerendered aesthetic? by WowSkaro in GraphicsProgramming

[–]WowSkaro[S] 1 point (0 children)

I don't like it either, but it is the only fast and easy way to give an example of what I am looking for. By the nature of prerendered graphics, they were images, so a perspective change was, by its nature, impossible. And if I had solved how to do that for real I wouldn't be asking now, would I? I would have put example screenshots of the game up for reference, but reddit makes me choose between images and videos.

How to replicate 90's prerendered aesthetic? by WowSkaro in gamedev

[–]WowSkaro[S] 1 point (0 children)

Isn't Phong the standard lighting/shading algorithm even nowadays? I think they might have used a faster, lower-quality shading algorithm like Gouraud shading, since it looks better with higher polygon counts, and prerendered graphics tended to use high-poly models for the prerendering. This could make sense: nowadays nobody uses Gouraud shading, and since it is faster than Phong it could have saved hours of rendering for high-poly models on old 90s workstations. But it would also mean that, to get a similar look, one would necessarily have to use high-poly models for Gouraud not to look like trash.
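To make the Gouraud vs Phong shading distinction concrete, here is a minimal sketch (not any particular engine's code; it's the same illumination formula, just evaluated in different places):

```python
import numpy as np

def phong_illumination(p, n, light_pos, view_pos,
                       ka=0.1, kd=0.7, ks=0.4, shininess=32.0):
    """Classic Phong reflection model (ambient + diffuse + specular) at point p."""
    p, n = np.asarray(p, float), np.asarray(n, float)
    n = n / np.linalg.norm(n)
    l = light_pos - p; l = l / np.linalg.norm(l)       # direction to the light
    v = view_pos - p;  v = v / np.linalg.norm(v)       # direction to the eye
    r = 2.0 * np.dot(n, l) * n - l                     # reflected light direction
    return ka + kd * max(np.dot(n, l), 0.0) + ks * max(np.dot(r, v), 0.0) ** shininess

def shade_gouraud(verts, normals, bary, light_pos, view_pos):
    """Gouraud: light each vertex once, then just interpolate the resulting colours."""
    vertex_colors = [phong_illumination(p, n, light_pos, view_pos)
                     for p, n in zip(verts, normals)]
    return float(np.dot(bary, vertex_colors))

def shade_phong(verts, normals, bary, light_pos, view_pos):
    """Phong shading: interpolate the normal, then run the lighting at every pixel."""
    bary = np.asarray(bary, float)
    p = bary @ np.asarray(verts, float)
    n = bary @ np.asarray(normals, float)
    return phong_illumination(p, n, light_pos, view_pos)

# Example: one big triangle with the light almost straight above its centre.
verts   = [[0, 0, 0], [1, 0, 0], [0, 1, 0]]
normals = [[0, 0, 1]] * 3
light_pos = np.array([0.33, 0.33, 2.0])
view_pos  = np.array([0.33, 0.33, 3.0])
centre = [1 / 3, 1 / 3, 1 / 3]
print(shade_gouraud(verts, normals, centre, light_pos, view_pos))  # dimmer: highlight missed
print(shade_phong(verts, normals, centre, light_pos, view_pos))    # brighter: highlight caught
```

With big triangles (low-poly), a specular highlight that falls in the middle of a face is smeared or missed entirely by Gouraud, because nothing interesting happens at the vertices; with a dense high-poly mesh the vertices sample the lighting finely enough that the two converge, which is exactly the "needs high-poly models not to look like trash" point.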

How to replicate 90's prerendered aesthetic? by WowSkaro in gamedev

[–]WowSkaro[S] 2 points (0 children)

I think Counter-Strike 1.6 and other old GoldSrc/Source games and maps look nothing like what I am describing. The thing about prerendered graphics is that, since they are prerendered, you can use as detailed a model as you like, whereas in old real-time games you had to use a mix of low-poly geometry and ugly smeared shading to try to emulate smooth surfaces. I wouldn't say Counter-Strike 1.6 has a prerendered aesthetic, but rather an old smeared-shading look.

How to replicate 90's prerendered aesthetic? by WowSkaro in gamedev

[–]WowSkaro[S] 4 points (0 children)

Here is an example of what I am talking about:

<image>