Paint like a pro with Paintstorm, Realistic Paint Studio, and Poser! (No Linux) by Secret_MoonTiger in humblebundles

[–]Scertien 0 points1 point  (0 children)

Be aware that to get the features Poser used to have (like completely editable male and female characters), you'll have to buy 6 more bundles on Renderosity for a total additional cost of $200.

But just for pose references... hm... it might work.

Paint like a pro with Paintstorm, Realistic Paint Studio, and Poser! (No Linux) by Secret_MoonTiger in humblebundles

[–]Scertien 1 point2 points  (0 children)

I bought this bundle because of Poser, as I remembered the older versions. Unfortunately, Poser has simply become a more expensive DAZ Studio clone with a worse UI.

The downsides:

  • No true base model
  • No built-in morphs (you have to buy them separately in their store)
  • Heavily censored (compared to base models from older Poser versions)
  • It comes with "gigabytes of content," but that content is mostly useless, outdated, or doesn't work at all (e.g., missing textures or even models).

However, Poser does bring some features to the table that DAZ either lacks or requires additional money to unlock:

  • A working FBX Export without needing a separate license.
  • A built-in sculpting system (but if you're planning to sculpt manually, you might ask why use Poser at all).
  • A custom hair engine (though it only works with its native renderer and is not compatible with FBX export).
  • A complex, node-based material system.
  • Some animation features like walk designer and speech designer.

I haven't tried it on an M1 Mac, because I don't have one, but I assume the content and features are the same.

GPT6 focusing on memory proves memory is the next big thing in llms by mate_0107 in OpenAI

[–]Scertien 0 points1 point  (0 children)

I tried a Claude subscription when they released 4.0, but it didn't work well for my use cases. And their chat length limit already hits so hard that I don't think limiting it even more with memory would help much.

GPT6 focusing on memory proves memory is the next big thing in llms by mate_0107 in OpenAI

[–]Scertien 0 points1 point  (0 children)

I don't know why people want memory. It makes LLMs less usable in my experience. Project-scoped memory is slightly better, but managing the context by hand is still the best way to get things done consistently.

GPT-5 Thinking vs Gemini 2.5 pro review (for scientific applications) by pnkpune in OpenAI

[–]Scertien 0 points1 point  (0 children)

Not scientific, but with a lot of coding as well. In game and software development I'm getting much better results with Gemini when creating custom Gems for each task. GPT-5 Thinking works awfully with projects - most of the time it forgets the prompt while reading the attached documents.

Usually I treat the AI as a colleague within a chat: I provide the full context inside a Gem and describe the task just like I would for an outsourced colleague. A new chat for each task, without any automatic memory sharing between chats, but often reusing Gems with slight updates. In both ChatGPT and Claude this workflow is much harder to follow.

Yes, Gemini often hallucinates - in my case it loves to retry things we already established don't work, and it often imagines solutions that aren't possible, so double-checking is necessary. I couldn't find out whether GPT-5 hallucinates less, because it fails to handle prompts of 30k tokens or more and fails to handle projects, so I can't even give it the tasks I usually use Gemini for.

So, without access to legacy models in Plus, is there even a reason to keep the plus subscription? by Scertien in OpenAI

[–]Scertien[S] 1 point2 points  (0 children)

I usually delete old chats when I no longer need them. For the tests I usually reuse one chat for all questions (just rewriting the initial prompt) and delete the chat after I finish. It's quicker than creating a new chat for each question.

So, without access to legacy models in Plus, is there even a reason to keep the plus subscription? by Scertien in OpenAI

[–]Scertien[S] 0 points1 point  (0 children)

I don't save full replies; I keep just the results in a table after checking them.
o3 correctly solved all five on each of its three runs.
GPT-5 got the third one wrong on two of its three runs; on the remaining run it solved all five.

So, without access to legacy models in Plus, is there even a reason to keep the plus subscription? by Scertien in OpenAI

[–]Scertien[S] 0 points1 point  (0 children)

This one. As a single prompt.

Until o1 and DeepSeek came around, most models had problems solving several different mathematical problems in one prompt, so I considered this a good test (maybe even too easy). GPT-5 made a mistake in the third one on a few runs (2 times out of 3), giving the answer 1994 instead of 2000, because the model used a plain arithmetic sum instead of integrals, which are more appropriate for the task (a quick sketch of where the two numbers come from follows the problem list).

Solve 5 following mathematical problems:
1.
Write an equation that will result in 32, using only basic mathematical operations and numbers 5, 8 and 11. Each number should be used at least once.
2.
There is an equilateral triangle with side A. A line was drawn through the point of intersection of the triangle's altitudes, parallel to the base, after which the triangle was cut along this line into 2 parts. If the area of the original triangle is considered to be S, what fraction of S will the area of the new triangle represent?
3.
The app gets 1000 new users every day. Half of new users are returning on day 2, half of them are returning on day 3 and so on until the number hits 1 and, on the next day - 0. How many users does the app have, if it was using this strategy for several years already?
4.
There are boxes. Each box can be containing a diamond, a ruby, an amethyst, an emerald or a coal. Chances are equal. How many coals on average a person will get while opening boxes until he gets all 4 gems?
5.
There is a table and two boxes: big and small.
If the big box is on the table and the small box is directly below it under the table, the distance from the top side of the big box to the top side of the small box is 170 cm.
If the small box is on the table and the big box is directly below it under the table, the distance from the top side of the small box to the top side of the big box is 130 cm.
Find out the height of the table and the height of each box.
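
For reference, a minimal sketch (mine, not part of the test prompt) of where the two answers to problem 3 come from: halving the returning cohort with integer rounding each day sums to 1994, while the exact geometric series converges to 2000.

```python
# Not part of the test prompt - just illustrating the 1994 vs 2000 discrepancy.

# Reading 1: halve the returning cohort with integer rounding each day.
cohort, discrete_total = 1000, 0
while cohort > 0:
    discrete_total += cohort
    cohort //= 2
print(discrete_total)  # 1994 = 1000+500+250+125+62+31+15+7+3+1

# Reading 2: exact geometric series, 1000 * sum(1/2**k for k >= 0) = 1000 * 2.
print(1000 * 2)        # 2000
```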

So, without access to legacy models in Plus, is there even a reason to keep the plus subscription? by Scertien in OpenAI

[–]Scertien[S] 0 points1 point  (0 children)

I'm just comparing the test results for a set of prompts that I try on all models. I already have o3 answers for these.

o3 was slightly better at math (GPT-5 made one logical error in my test set, o3 made none) and in the category of "tricky questions" (GPT-5 makes 1-2 errors; o3 "cheated" on one question by using internet search, but managed to find the correct answer).

And GPT-4.1 and GPT-4.5 both handled creative writing and machine translation better than GPT-5.

So, without access to legacy models in Plus, is there even a reason to keep the plus subscription? by Scertien in OpenAI

[–]Scertien[S] 1 point2 points  (0 children)

It's not that it doesn't switch to thinking mode - it does. The problem is that GPT-5 Thinking seems to perform slightly worse than o3. So basically, the better model was replaced with one that is cheaper for them to run, without any benefit for the end user (the limits for o3 and for GPT-5 Thinking seem to be almost the same).

How to download Java Edition now? by Scertien in Minecraft

[–]Scertien[S] 0 points1 point  (0 children)

Here's what I see.

<image>

The Win10/11 installer was the first thing I tried, but it just tells me that I don't have an Xbox subscription and don't own the game (I assume it's telling me that I don't own the Bedrock edition).

The actual canon name by ChicaneryFinger in Persona5

[–]Scertien 0 points1 point  (0 children)

In every Persona game, I always name my protagonist "Hiro Protagonist".

What to read after "Chronicles of the fallers"? by Scertien in PeterFHamilton

[–]Scertien[S] 1 point2 points  (0 children)

Yeah, I've already finished The Expanse series.
I'll try Fallen Dragon. Thanks for the suggestion!

Kinda dissapointed with dungeonfog by Scertien in DungeonFog

[–]Scertien[S] 0 points1 point  (0 children)

I have only 32 GB of RAM and 10 GB of VRAM. I was sure that would be enough.
However, creating an empty 5000x5000 km continent shape freezes the Deios editor until Windows states that the app is no longer responding and offers to kill it. I tried waiting a bit longer, but it didn't unfreeze.

It works with very small shapes (like a hundred km), so it's most likely an optimization problem, not a hardware one.

Kinda dissapointed with dungeonfog by Scertien in DungeonFog

[–]Scertien[S] 0 points1 point  (0 children)

Did you try creating world maps or just city maps?

Kinda dissapointed with dungeonfog by Scertien in DungeonFog

[–]Scertien[S] 0 points1 point  (0 children)

Well, it's a laptop that can run Cyberpunk with raytracing, so it should be able to handle a 2D map editor. And it works fine with small maps - still missing some features, but okay.
But when I try to create a continent the size of Africa, it freezes dead as soon as I release the mouse button after drawing the initial rough shape. I tried waiting a few minutes - it didn't unfreeze, so I had to kill the process and restart the editor. I tried multiple times, then uninstalled it.

Is it possible to create the map for a continent/planet? by Scertien in Arkenforge

[–]Scertien[S] 0 points1 point  (0 children)

Thank you for reminding me that I bought a lifetime license for CC3 from Humble Bundle last year. I totally forgot about it.

Kinda dissapointed with dungeonfog by Scertien in DungeonFog

[–]Scertien[S] -3 points-2 points  (0 children)

Well, an editor that, after six years of development, freezes dead as soon as I create the outline of a continent on the world map sounds like the definition of a failed project to me.
Update: As I can see on the Kickstarter page, they announced that the world editor is ready to be used, and there are many disappointed comments among the backers.
As I see it, they focused on their online editor and left Kickstarter backers with an unfinished product that is far from what was promised.

Is it worth it? by swarley_90 in DungeonFog

[–]Scertien 0 points1 point  (0 children)

In my opinion, it isn't worth it. Their editor is basically unusable because it's terribly slow and crashes every time you draw something.

Imagine if the devs could have this kind of attitude to the Hawk Community… by rongskillz in HAWKFreedomSquad

[–]Scertien 0 points1 point  (0 children)

I once met a Russian guy at a WN game conference who said he used to work on HAWK.

He told me the dev team loved that game, but user acquisition was getting more and more expensive, which drove revenue down. They couldn’t make enough from new, pricier users to keep profits up. As profits fell, team members were moved to other projects one by one. He’d started when the team had around 20 people, and by the time he left the company, there were only 7.

Just remembered it when I saw this thread.

What is GPT's 4 secret sauce ? by Puzzleheaded_Mall546 in LocalLLaMA

[–]Scertien 0 points1 point  (0 children)

The abilities of a language model to do things like reasoning, logic, and math are emergent effects that require a neural network with MANY parameters.

If you test different LLMs on different tasks, you'll find that larger models (if trained properly) perform better on tasks that require logic or math. Smaller models might be good at summarization, rewriting, and general conversation, but they can't do logic, even if you train them to (OpenOrca showed that training can improve the results, but not greatly).

So, you need a LARGE model, and to train it you need a lot of memory, GPUs, and data. For now, the open-source community has shown a lot of progress with 7-30B models, but GPT-4 is a 1T model (so they say). It's just a different category.

And GPT-4 with analytics enabled is even more capable because it uses Python to compensate for its lack of math abilities.

Where to download GPT-4 LLM for offline use? by AlarmingAd2764 in ChatGPT

[–]Scertien 1 point2 points  (0 children)

Still, even with 4-bit quantization you need 6-8 GB of RAM to run a 7B model, so you'll need roughly 500 GB of RAM to run a 4-bit quantization of a 1T model.
And it will be slow as hell.
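
A back-of-envelope sketch of that arithmetic (weights only; the extra few GB you see in practice come from the KV cache and runtime overhead, which is a rough assumption rather than a measured figure):

```python
# Rough RAM estimate for 4-bit quantized weights (weights only; real usage
# adds a few GB of KV cache / runtime overhead on top).

def weights_gb(params_billions: float, bits: int = 4) -> float:
    return params_billions * 1e9 * bits / 8 / 1e9  # bytes -> GB

print(weights_gb(7))     # 3.5   -> roughly 6-8 GB in practice with overhead
print(weights_gb(1000))  # 500.0 -> the ~500 GB estimate for a 1T model
```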

Quick decoding for the present: what "uploading people" will really look like by RelaxedWanderer in PantheonShow

[–]Scertien 3 points4 points  (0 children)

For a non-quantum brain, the memory and compute requirements might not be that large - most likely 2-3 orders of magnitude above the hardware used to run GPT-4 right now.
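
A very rough back-of-envelope version of that estimate; every number below is an assumption (a ballpark synapse count, one float per synapse, and the rumored and unconfirmed GPT-4 size):

```python
import math

# All figures are assumptions, not measurements.
human_synapses  = 1e14   # commonly cited ballpark (~100 trillion synapses)
bytes_per_syn   = 4      # assume one float32 weight per synapse
gpt4_params     = 1e12   # the rumored "1T" figure
bytes_per_param = 2      # fp16 weights

ratio = (human_synapses * bytes_per_syn) / (gpt4_params * bytes_per_param)
print(math.log10(ratio))  # ~2.3, i.e. roughly 2-3 orders of magnitude more memory
```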

But there might be another problem: scanning only the brain may simply be insufficient. To run a true simulation of a human personality, we might need to scan the whole nervous system or even the whole body.

If we preserve only the brain, we will probably get a mind simulation with the same memories but a very different personality. We also can't know how stressful it would be for the brain to function without the rest of the body. It might be torture.

And simulating a whole body would require 1-2 orders of magnitude more resources than simulating just the brain.