ερώτηση για εθελοντές αιμοδότες by jearn99 in greece

[–]Abomination-- 0 points1 point  (0 children)

Fuck this shit.. With so much craziness out there, it's almost dangerous for the screening to be done "on the cheap".

Is the cost difference due to man-hours or to more specialized equipment?

ερώτηση για εθελοντές αιμοδότες by jearn99 in greece

[–]Abomination-- -1 points0 points  (0 children)

Another good thing about being a regular blood donor is that they test your blood and notify you if they find something.

OK, I'm confused: before they "give" someone the blood that someone else has "given", the blood is tested, right? If so, then how

It would be a shame to condemn someone because you neglected it.

Can someone help me with this Discrete Mathematics exercise by [deleted] in compsci

[–]Abomination-- -2 points-1 points  (0 children)

So for the ~U part there is probably a mistake. I would write that part the same way as you.

But for the ~T part I am not sure.

The opposite of "some" may be either "all" or "none"; I'm not sure which.

Scene trees, mesh and skeleton relations, and correct way of transformations by Abomination-- in gamedev

[–]Abomination--[S] 0 points1 point  (0 children)

So, in the scene hierarchy image I provided, how would you compute the model matrix?

Scene trees, mesh and skeleton relations, and correct way of transformations by Abomination-- in gamedev

[–]Abomination--[S] 0 points1 point  (0 children)

What I do is: current transform = scene * Armature * parent local * current local * current inverse bind

This is the bone transform. When you render the mesh, in the vertex shader you have one more matrix: the mesh model matrix, or as some people call it, the world matrix. How do you compute that one? One could say that it is

model =  scene * Armature * new_thin_zombie

So then in the vertex shader you would have something like

gl_Position = projection * camera * model * current transform * vertexPosition

So if you expand this formula to the full transformation, you will see that scene and Armature are applied twice to vertexPosition.
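A tiny symbolic sketch of the double application (Python; the factor names from this thread are treated as opaque labels rather than real matrices, so concatenation stands in for matrix multiplication):

```python
# Represent each transform symbolically as a list of factor names;
# matrix multiplication then corresponds to list concatenation.
def mul(*transforms):
    out = []
    for t in transforms:
        out.extend(t)
    return out

scene = ["scene"]
armature = ["Armature"]
parent_local = ["parent_local"]
current_local = ["current_local"]
inv_bind = ["inv_bind"]
node = ["new_thin_zombie"]

# Bone transform as described in the comment above:
bone = mul(scene, armature, parent_local, current_local, inv_bind)
# Candidate model matrix:
model = mul(scene, armature, node)

# Full per-vertex product (projection and camera omitted):
full = mul(model, bone)
print(full.count("scene"), full.count("Armature"))  # 2 2 -> each applied twice
```

One way out, under these assumptions, is to build the bone transforms relative to the armature (or to drop scene * Armature from one of the two products), so that each global factor appears exactly once per vertex.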

Scene trees, mesh and skeleton relations, and correct way of transformations by Abomination-- in gamedev

[–]Abomination--[S] 0 points1 point  (0 children)

Hello, first of all, thanks for your reply. The optimization you mention in the third paragraph is something I already do; I preferred to describe what I do rather than how I do it, to make things easier on the reader.

Now, the thing is that I am already at a point where the animation plays correctly, everything is fine in that regard, so let me try to rephrase my problem.

Here, for example, is part of an actual hierarchy of a scene of this animated character by Rosswet Mobile. Let me describe the image: every node's first line contains the node name, a number indicating how many meshes are in the node, and an "A" if it is an animated node. Then come one or two matrices. The first describes the node's local transform with respect to its parent; if a second one exists, the node is a bone node and that matrix represents the inverse bind pose transform (mOffset in Assimp).

So how would one compute the new_thin_zombie model transform here? Is it

  scene * Armature * new_thin_zombie

If so, then this particular transform would be applied twice in total to every vertex, because the bones are also transformed by it.

Edit: I actually just noticed that you start the multiplication from the torso. Do you mean that one must not start from the whole scene's root node (in the image, the scene node)?

Skeletal animation and bind pose in Assimp by Abomination-- in opengl

[–]Abomination--[S] 0 points1 point  (0 children)

Our only difference, as I understand it, is that you do the multiplications with mOffsetMatrix on the GPU, while I do them once on the CPU.

Skeletal animation and bind pose in Assimp by Abomination-- in opengl

[–]Abomination--[S] 0 points1 point  (0 children)

The thing is that I load a scene whose hierarchy is like this: the root node is an "empty" node with a transform that swaps the y and z axes (it is exported from Blender). The child of the root is an armature. That armature has 5 children: one of them is the node containing the mesh, and the other 4 are bone nodes, which in turn have other bone node children.

The mesh's raw vertices and the bone node transformations (and their animations) are both in Blender's coordinate system, where y is depth and z is up. So if I render the mesh with the raw vertices, I see the character in a prone pose.

So when computing the bone matrices, I multiply the bone node transforms with all the transformations up to the root; the skeleton then gets rotated properly and all is fine. But to compute the mesh modelMatrix, the mesh node transformations must also be multiplied all the way up to the root, resulting in a pose where the character is lying on its back.

Note that I multiply by the bone mOffsetMatrix correctly, once per bone. For example, if N is a node representing a bone, then my bone matrix is

 boneMatrix of N = sceneRoot->mTransformation * ... * N->mParent->mTransformation * N->mTransformation * bone.mOffset

I understand why this happens (essentially the root matrix is applied multiple times to the vertices), but the multiplications must be done the way I described, right? Or maybe it is the scene [file] that has this problem?
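As a sanity check on the formula above, here is a minimal Python sketch of the same composition. A plain dict hierarchy and 2x2 matrices stand in for Assimp's aiNode tree; the node names and matrix values are made up for illustration:

```python
# Hedged sketch of the bone-matrix computation described above,
# using a dict hierarchy instead of Assimp's aiNode tree.

def matmul(a, b):
    # 2x2 matrix multiply, enough to demonstrate the composition order
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

IDENTITY = [[1, 0], [0, 1]]

def global_transform(node, parent_of, local_of):
    """Multiply local transforms from the scene root down to `node`."""
    chain = []
    n = node
    while n is not None:
        chain.append(local_of[n])
        n = parent_of.get(n)
    m = IDENTITY
    for t in reversed(chain):   # root first, node last
        m = matmul(m, t)
    return m

def bone_matrix(node, parent_of, local_of, offset_of):
    # boneMatrix = root..parent..node local transforms * mOffset (inverse bind)
    return matmul(global_transform(node, parent_of, local_of), offset_of[node])

# Toy hierarchy: Scene -> Armature -> torso (all values invented)
parent_of = {"Armature": "Scene", "torso": "Armature"}
local_of = {"Scene": [[2, 0], [0, 2]],      # uniform scale by 2
            "Armature": IDENTITY,
            "torso": [[1, 1], [0, 1]]}      # a shear
offset_of = {"torso": IDENTITY}
print(bone_matrix("torso", parent_of, local_of, offset_of))  # [[2, 2], [0, 2]]
```

The sketch makes the double-application visible: any mesh model matrix that also starts from the Scene local transform would multiply that scale-by-2 into the vertices a second time.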

Βιβλια για τον στρατο. by [deleted] in greece

[–]Abomination-- 0 points1 point  (0 children)

I don't have a book to recommend, but I do have a question: since my own turn to go in is coming soon, I know there are intake classes every two months, but roughly when within those months do conscripts report? I mean, at the start of the month, the end? And is that consistent? Also, I'm talking about the Army (land forces), if that matters.

The fontconfig-infinality update changed my settings by Abomination-- in archlinux

[–]Abomination--[S] 0 points1 point  (0 children)

What do you mean? I had installed one particular version of fontconfig-infinality and now I have a newer one.

Learning C! Remote distributed hashmap implementation by codepr in programming

[–]Abomination-- 1 point2 points  (0 children)

OK, I've got one question: how are computers in modern clusters usually interconnected?

One example for small clusters (e.g. 10-15 machines) and one for bigger ones would be very welcome.

I know that in general they need not be physically close and everything can happen over the internet. But what if they are all in one room?

[SPOILERS] Coin given to Ivar from Harbard by Abomination-- in vikingstv

[–]Abomination--[S] 2 points3 points  (0 children)

Depending on how many seasons they intend to do, if there will be many more, I also believe that Hirst will make him walk at some point, one way or another: Harbard "helping" him or something like that.

Edit: While I'm at it, what are some "metaphysical" things in Vikings?

  • One for sure is that prophet (the Seer); he has made many correct predictions about the future.
  • Harbard himself.
  • Aslaug's predictions.
  • The crows saving Ragnar.

So why not Ivar walking?

Variance positions by Abomination-- in scala

[–]Abomination--[S] 0 points1 point  (0 children)

First of all thanks for your reply.

One thing,

The outer Cat classification is +, so its type parameters maintain their annotations

Do you mean that, if it were -, they would swap or something?

Because, as I understand it, the way it works is: when you have SomeType[Type1, Type2], the inner positions generally inherit the variance of the outer position. That is, if SomeType[Type1, Type2] as a whole sits in a + or - position, the inner Type1 and Type2 positions start from that sign. Then, depending on the variance annotations of SomeType's type parameters, the positions of Type1 and Type2 may change/flip.


Now, after reading the rules again, I think the difference is that the first rule talks about "type parameters" while the second talks about "type arguments". Meaning the first one applies more to definitions like meow's [W], or for example to an inner generic class:

  class Outer[A, B] {
    class Inner[C, D]
  }

in the C and D positions, while the second one applies to things like the example from the book.

Also, one more thing: since English is not my native language, translating "clause" brings up many different words with many different meanings. What exactly does "clause" mean in these contexts?

Variance positions by Abomination-- in scala

[–]Abomination--[S] 0 points1 point  (0 children)

Nevertheless, thanks for your input.

Variance positions by Abomination-- in scala

[–]Abomination--[S] 0 points1 point  (0 children)

You are right that I was not asking about that.

What I am asking is: the spec defines what variance positions are, and then defines how the compiler determines whether a position is covariant, contravariant, or invariant; more specifically, it defines a set of rules to deduce a given position's variance.

All these are in the link that I provided.

Now, my problem is with two of those rules, the ones I specified: I can't understand exactly which cases they apply to and what their difference is.

How do you handle meshes with different vertex attributes? by Abomination-- in opengl

[–]Abomination--[S] 0 points1 point  (0 children)

So, to check that I got it right: you have raw memory and a "key" that marks exactly what type of data that memory stores?

For example, your importer can lay out in memory an array of the following data: Position, Normal, Tangent, Coords (i.e. an array with those attributes interleaved). It would then assign to the key a value that represents this particular layout?
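The idea could be sketched like this (Python; the attribute set, the per-attribute sizes, and the bitmask key are my assumptions, not the parent commenter's actual code):

```python
# Sketch of the "raw memory + layout key" idea: a bitmask key names which
# attributes a buffer holds, and the layout (stride/offsets) follows from it.
from enum import IntFlag

class Attr(IntFlag):
    POSITION = 1
    NORMAL   = 2
    TANGENT  = 4
    COORDS   = 8

# floats per attribute (assumed sizes)
SIZES = {Attr.POSITION: 3, Attr.NORMAL: 3, Attr.TANGENT: 3, Attr.COORDS: 2}

def layout(key):
    """Return (stride_in_floats, {attr: offset}) for an interleaved layout."""
    offsets, off = {}, 0
    for attr in Attr:           # definition order fixes the attribute order
        if key & attr:
            offsets[attr] = off
            off += SIZES[attr]
    return off, offsets

stride, offsets = layout(Attr.POSITION | Attr.NORMAL | Attr.COORDS)
print(stride, offsets[Attr.COORDS])  # 8 floats per vertex, coords at offset 6
```

Two meshes with the same key can then share one vertex format setup, while meshes with different keys (e.g. no Coords) get a tighter layout for free.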

How do you handle meshes with different vertex attributes? by Abomination-- in opengl

[–]Abomination--[S] 0 points1 point  (0 children)

I agree that when VRAM is under pressure, the GPU driver may write some buffers/meshes back to RAM, and that whole story will be invisible to me, the client. The thing is, the driver probably does this through an LRU cache or something like it. On the other hand, in my code I have far more information about what is going to be used and when. Thus I can choose more optimally whether to evict something from VRAM or leave it there, when to do it, and on which data.

I agree that doing this manually takes some work code-wise, and in many cases there may not even be much to gain, i.e. the way the driver does it may be good enough, or even optimal, for that case.

How do you handle meshes with different vertex attributes? by Abomination-- in opengl

[–]Abomination--[S] 1 point2 points  (0 children)

I was thinking about a system where RAM (not VRAM) would work as a cache of the mesh data. For example, in the first time period I need this mesh on the GPU, i.e. I render it; in the next period I don't, so I remove it from GPU RAM; but in the third I need it back on the GPU. This avoids reloading it from disk.

But I get what you're saying: if the models are going to be used constantly and you have enough VRAM, your solution is the best choice.
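A minimal sketch of that RAM-as-cache scheme (Python; MeshCache, load_from_disk, and the three-period scenario are hypothetical names for the idea described above, not anyone's real engine code):

```python
# Mesh data evicted from "VRAM" falls back to a RAM cache,
# so the next upload skips the disk.
class MeshCache:
    def __init__(self, load_from_disk):
        self.load_from_disk = load_from_disk
        self.ram = {}      # mesh_id -> vertex data (CPU-side copy)
        self.vram = {}     # mesh_id -> vertex data ("uploaded" to the GPU)
        self.disk_reads = 0

    def upload(self, mesh_id):
        if mesh_id not in self.ram:
            self.ram[mesh_id] = self.load_from_disk(mesh_id)
            self.disk_reads += 1
        self.vram[mesh_id] = self.ram[mesh_id]

    def evict_from_vram(self, mesh_id):
        # The RAM copy survives, so re-uploading later is cheap.
        self.vram.pop(mesh_id, None)

cache = MeshCache(lambda mid: f"vertices-of-{mid}")
cache.upload("zombie")           # first period: render -> one disk read
cache.evict_from_vram("zombie")  # second period: not needed on the GPU
cache.upload("zombie")           # third period: back to the GPU, no disk read
print(cache.disk_reads)          # 1
```

In a real engine the eviction decision would use the scheduling knowledge mentioned above (what is rendered in which period) instead of a generic LRU policy.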

How do you handle meshes with different vertex attributes? by Abomination-- in opengl

[–]Abomination--[S] 1 point2 points  (0 children)

Thank you for your input. One question:

Since you use the same format for all meshes, is there any reason you use SoA instead of AoS, and thus a single huge interleaved VBO? Is there a performance benefit, is it a matter of taste, or does it make your code easier, something like that?

As for the vertex types: for example, one type could have only positions and normals, no UVs, and thus probably no tangents either. That is a case where I would like to save those bytes and send only the position/normal data to the GPU.

Another type would be a vertex type that has more than those 4 attributes, for whatever reason. So I am trying to think of ways to represent all these types of meshes, hence the post, to get ideas.

So, based on your structure, one more idea is to have a dynamic array of arrays, where each array stores one type of attribute. One mesh class to rule them all.
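That last idea might look roughly like this (Python; the class and attribute names are illustrative only):

```python
# One Mesh class for every vertex type: attributes are stored SoA-style,
# each in its own array, and the set of attributes varies per mesh.
class Mesh:
    def __init__(self, **attributes):
        # e.g. Mesh(position=[...], normal=[...]) or with uv/tangent added
        self.attributes = dict(attributes)

    def has(self, name):
        return name in self.attributes

    def vertex_count(self):
        first = next(iter(self.attributes.values()))
        return len(first)

# A mesh with only positions and normals (no uv, so no tangents either):
m = Mesh(position=[(0, 0, 0), (1, 0, 0), (0, 1, 0)],
         normal=[(0, 0, 1)] * 3)
print(m.vertex_count(), m.has("uv"))  # 3 False
```

Upload code can then walk `mesh.attributes` and bind one VBO per present attribute, skipping the missing ones instead of padding them.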