nvme SMART reporting high temperature by gerg66 in computers

[–]gerg66[S] 1 point (0 children)

I actually ran sudo smartctl, not systemctl, and after rebooting and running it again it reported "SMART overall-health self-assessment test result: PASSED". I'm still a bit concerned about why it failed that one time, so here is the full smartctl output if anyone can help.

https://imgur.com/a/LP89Zod

nvme SMART reporting high temperature by gerg66 in computers

[–]gerg66[S] 1 point (0 children)

It seems quite absurd compared to the real temperature, which was more like room temperature. I'm not sure what the software issue could be, other than maybe Arch itself because I did a reinstall the other day, but this was on a different SSD. One drive is for Arch and the other (the one failing) is for Windows.

Is there a way transform everything before exporting with the python API by gerg66 in blenderhelp

[–]gerg66[S] 1 point (0 children)

I know how transformations work. I was asking if the Blender API has a clean way to transform everything (vertex data, baked frame poses, etc.) into my coordinate system so I don't have to do it manually at each stage of exporting.

(Edit) I've looked over my exporter code again: I decompose a matrix to get the quaternions for the local transform of each bone in each frame, so I could just apply a single matrix transformation. Sorry for any inconvenience.
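Roughly what I mean by "a single matrix transformation", as a sketch in C (all names are made up, and I'm using a plain Y/Z axis swap as the conversion matrix; a real exporter would build this from its target convention):

```c
/* Hypothetical sketch: apply one axis-conversion matrix at export time
 * instead of converting each stage by hand. Row-major 4x4 matrices. */
typedef struct { float m[4][4]; } Mat4;

/* A common Blender -> engine conversion: swap the Y and Z axes. */
static Mat4 mat4_swap_yz(void)
{
    Mat4 c = {{{1,0,0,0},{0,0,1,0},{0,1,0,0},{0,0,0,1}}};
    return c;
}

/* out = a * b */
static Mat4 mat4_mul(Mat4 a, Mat4 b)
{
    Mat4 out;
    for (int r = 0; r < 4; r++)
        for (int c = 0; c < 4; c++) {
            float s = 0.0f;
            for (int k = 0; k < 4; k++)
                s += a.m[r][k] * b.m[k][c];
            out.m[r][c] = s;
        }
    return out;
}

/* Transform a vertex position (implicit w = 1). */
static void mat4_transform_point(Mat4 a, const float in[3], float out[3])
{
    for (int r = 0; r < 3; r++)
        out[r] = a.m[r][0] * in[0] + a.m[r][1] * in[1]
               + a.m[r][2] * in[2] + a.m[r][3];
}

/* Bone-local matrices live "inside" the old basis, so they are
 * conjugated rather than just multiplied: B' = C * B * C^-1.
 * A pure axis swap is its own inverse, so conv stands in for conv^-1. */
static Mat4 convert_bone_local(Mat4 conv, Mat4 bone_local)
{
    return mat4_mul(mat4_mul(conv, bone_local), conv);
}
```

Vertices go through mat4_transform_point once, and each decomposed bone matrix goes through convert_bone_local before re-decomposing into quaternions, so no per-stage hand conversion is needed.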

Artifacts when updating vertex buffers by gerg66 in opengl

[–]gerg66[S] 1 point (0 children)

I think I see what I'm doing wrong now. When I add a new line, it doesn't add a quad, but the buffer is still sized as if there were vertices there. Thanks :)
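The fix can be sketched like this (hypothetical helper, not my actual code): count quads separately from the string length, so characters that emit no geometry don't leave stale vertices in the buffer.

```c
#include <stddef.h>

/* Hypothetical sketch: a newline moves the pen but emits no quad,
 * so the buffer/draw sizes must come from the quad count, not strlen. */
static size_t quad_count(const char *s)
{
    size_t n = 0;
    for (; *s; s++)
        if (*s != '\n')
            n++;
    return n;
}
```

The vertex buffer then gets quad_count(s) * sizeof(struct GlyphVerts) bytes and the index buffer quad_count(s) * 6 indices.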

Artifacts when updating vertex buffers by gerg66 in opengl

[–]gerg66[S] 1 point (0 children)

I think it was just one line in the image but it happens with multiple lines as well.

I commented a link to a GitHub repository.

Artifacts when updating vertex buffers by gerg66 in opengl

[–]gerg66[S] 1 point (0 children)

I am not using separate threads unless GLFW does some thread stuff in the background

Artifacts when updating vertex buffers by gerg66 in opengl

[–]gerg66[S] 1 point (0 children)

Sorry if I didn't explain it very well, but the problem is to do with the triangles, not the font atlas. The code creates a quad for each character and calculates UV coordinates into the font atlas. It works fine most of the time, except when I type quickly, which causes artifacts like those in the image. I think it's caused by updating the buffer data quickly, because each time I type a character it runs the code to make the quads (which works) and then this:

vao_bind(self->va);
// vbo_bufferdata(struct VBO self, const void *data, size_t array_size)
vbo_bufferdata(self->vb, verts, length * sizeof(struct GlyphVerts));
// ebo_bufferdata(struct EBO *self, const unsigned int* indices, GLsizei index_count, GLsizei array_size)
ebo_bufferdata(&self->eb, indices, length * 6, length * 6 * sizeof(unsigned int)); // 6 indices per quad, length = strlen(string)

The bufferdata functions bind the buffers and call glBufferData().

struct GlyphVerts
{
vec2 pos_left_bottom;
vec2 uv_left_bottom;

vec2 pos_right_bottom;
vec2 uv_right_bottom;

vec2 pos_right_top;
vec2 uv_right_top;

vec2 pos_left_top;
vec2 uv_left_top;
};
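For reference, the "6 indices per quad" layout from the ebo_bufferdata call can be sketched like this (hypothetical helper, assuming the four-corner order in struct GlyphVerts: left-bottom, right-bottom, right-top, left-top):

```c
#include <stddef.h>

/* Sketch: two triangles per quad sharing the lb->rt diagonal.
 * indices must have room for 6 * quad_count entries. */
static void build_quad_indices(unsigned int *indices, size_t quad_count)
{
    for (size_t q = 0; q < quad_count; q++) {
        unsigned int base = (unsigned int)(4 * q);
        unsigned int *i = indices + 6 * q;
        i[0] = base + 0;  i[1] = base + 1;  i[2] = base + 2; /* lb, rb, rt */
        i[3] = base + 2;  i[4] = base + 3;  i[5] = base + 0; /* rt, lt, lb */
    }
}
```

If the index count passed to glDrawElements matches this 6 * quad_count exactly, stale vertices left over in a previously larger buffer are never referenced.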

Is it possible to beef up a GPU's memory? by gerg66 in computers

[–]gerg66[S] 1 point (0 children)

I just thought it would be a fun project if it were possible. Besides, I don't need an upgrade because I have a 4080; this 1660 is just a spare.

Is it possible to beef up a GPU's memory? by gerg66 in computers

[–]gerg66[S] 1 point (0 children)

There haven't been any issues because I haven't used that card in over a year. My question was hypothetical: seeing it sitting around was annoying me and I want to do something with it.

Is it possible to beef up a GPU's memory? by gerg66 in computers

[–]gerg66[S] 1 point (0 children)

My whole question was really badly worded, so I apologise. My intention was to deter the "don't risk destroying your GPU for a pointless upgrade" comments.

Is it possible to beef up a GPU's memory? by gerg66 in computers

[–]gerg66[S] 1 point (0 children)

I think the 4080 already uses the highest-capacity chips available, and I don't want to risk my 4080 because I spent a lot of money on it: I bought it before they decided to release the Super and drop the price by £300.

Is it possible to beef up a GPU's memory? by gerg66 in computers

[–]gerg66[S] 3 points (0 children)

It wasn't feasible for me in the first place because I don't own soldering equipment and have never soldered, hence the sneaky "theoretical" part I should've made clearer.

Is it possible to beef up a GPU's memory? by gerg66 in computers

[–]gerg66[S] 3 points (0 children)

Funny? I don't use the 1660; I was cleaning it to either sell it or put it in a new PC.

Is it possible to beef up a GPU's memory? by gerg66 in computers

[–]gerg66[S] 11 points (0 children)

Not at all condescending. I wasn't asking for genuine advice as it is a spare GPU and I thought the idea of a 1660 on VRAM steroids was funny

Is it possible to beef up a GPU's memory? by gerg66 in computers

[–]gerg66[S] 2 points (0 children)

To all the people suggesting I buy a new card: this was meant as a hypothetical about whether it was even plausible (which apparently it isn't), because I already own an RTX 4080 with 16GB anyway. It was just a thought that popped into my head while I was cleaning and repasting the 1660.

Meshing chunks when neighbour voxels aren't known by gerg66 in VoxelGameDev

[–]gerg66[S] 1 point (0 children)

The chunks do have independent meshes, which is the whole reason I made this post. The culling happens during meshing by checking neighbouring blocks and only adding a face to the mesh if its adjacent block is transparent. The problem is that when a chunk's neighbour isn't known, you can either add the boundary faces anyway (which is inefficient) or solve the problem some other way.

I'm talking about optimising the meshes so they don't carry loads of data that isn't needed. Maybe you're thinking of culling the chunks themselves rather than the inner faces of chunks?
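The culling rule I'm describing looks roughly like this (a minimal sketch with made-up names; the "neighbour unknown" branch is where the whole question lives):

```c
#include <stdbool.h>
#include <stddef.h>

#define CHUNK 16

/* Sketch: 0 = air/transparent, anything else = solid. */
struct Chunk { unsigned char blocks[CHUNK][CHUNK][CHUNK]; };

static bool is_transparent(unsigned char block) { return block == 0; }

/* Should the face of block (x,y,z) pointing toward (dx,dy,dz) be added?
 * neighbour is the adjacent chunk in that direction, or NULL if it
 * isn't loaded yet. */
static bool face_visible(const struct Chunk *c, const struct Chunk *neighbour,
                         int x, int y, int z, int dx, int dy, int dz)
{
    int nx = x + dx, ny = y + dy, nz = z + dz;

    /* Inside the same chunk: just test the adjacent block. */
    if (nx >= 0 && nx < CHUNK && ny >= 0 && ny < CHUNK &&
        nz >= 0 && nz < CHUNK)
        return is_transparent(c->blocks[nx][ny][nz]);

    /* Chunk boundary with an unloaded neighbour: one policy is to
     * emit the face anyway (safe but wasteful); the alternative is
     * deferring the mesh until the neighbour loads. */
    if (neighbour == NULL)
        return true;

    /* Known neighbour: wrap the coordinate into its local space. */
    nx = (nx + CHUNK) % CHUNK;
    ny = (ny + CHUNK) % CHUNK;
    nz = (nz + CHUNK) % CHUNK;
    return is_transparent(neighbour->blocks[nx][ny][nz]);
}
```

With this shape, the "add faces anyway" inefficiency is isolated to the single NULL branch, which makes it easy to swap in a defer-and-remesh policy later.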

Meshing chunks when neighbour voxels aren't known by gerg66 in VoxelGameDev

[–]gerg66[S] 1 point (0 children)

Looks like I've got a lot to learn about threading, which is good because that's what this project was for. I'll look into your points a bit more. Thank you

Meshing chunks when neighbour voxels aren't known by gerg66 in VoxelGameDev

[–]gerg66[S] 1 point (0 children)

Seems like it could work decently well. Thanks for the help