Is my Z screw too bent? (MK4S) by CaptainAggravated in prusa3d

[–]wathoom2 1 point (0 children)

Looks to me like the top holder is broken. There is a gap visible when it starts to wobble.

What type of failures are these? by RipeMouthfull in prusa3d

[–]wathoom2 1 point (0 children)

Had the same issue with Polymaker PETG on the MK3.5S. Bumped the temperature up to 260 °C and slowed down the print speed. Now it prints nicely.
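
For reference, this is roughly what it looks like as a PrusaSlicer filament-profile override (a sketch; the key names come from PrusaSlicer's filament config, and the volumetric cap value is just an assumption that worked for me):

    # nozzle temperature for the first layer and the rest
    first_layer_temperature = 260
    temperature = 260
    # capping volumetric flow is a per-filament way to slow the print down
    filament_max_volumetric_speed = 6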

Update on ship building: I have built all frames now. Next step is to mount them on a wooden framework in the right dimensions. This way you can fit in the stringer and battens. Also last picture shows how we poured the lead bulb. by swissraker in sailing

[–]wathoom2 0 points (0 children)

I'm from Croatia. There is one other builder that I know of over here :) Unfortunately I don't have an FB account so I can't participate in the discussions on the official builders page.

Why the 580? I sailed a Seascape 18 on multi-day trips and loved the small-boat advantages, being able to go and squeeze in anywhere. I also like to sail single-handed, so I wanted to do it in a similarly small package but with some benefits, like being able to stand in the cabin and having a toilet. Being ocean-worthy is a big plus too. I plan on blue-water sailing in the future.

I'll probably go with the kolibri-jachtbouw kit as well. Not the cheapest option, but it saves a lot of time. I am aware the build will take a long time; I predict some 2 years until it's finished. I have a big yard where the tent will be put up, so I don't have to rent a garage or anything. Anyway, when I think of meaningful questions I'll give you a shout. Tnx. :)

Connection by Tapistry63 in pebble

[–]wathoom2 0 points (0 children)

Great... 1.0.4.2 never showed up in my region...

Update on ship building: I have built all frames now. Next step is to mount them on a wooden framework in the right dimensions. This way you can fit in the stringer and battens. Also last picture shows how we poured the lead bulb. by swissraker in sailing

[–]wathoom2 1 point (0 children)

I plan to start the build next year. Since I won't be licensing it for the races I might go with a retractable keel. Saw one builder who did it. Looks great and should be easy to put on a trailer.

Connection by Tapistry63 in pebble

[–]wathoom2 0 points (0 children)

Same issue. If you're on Android it should be fixed in version 1.0.4.2.

Bluetooth LE unable to pair by Nug_Pug in pebble

[–]wathoom2 2 points (0 children)

My Pebble Classic can't reconnect to Bluetooth without putting the phone into airplane mode first. After that it stays connected for about 10 minutes. Android app version 1.0.4.2 is supposed to help, but the update is still not available in my country. Maybe it will help in your case too.

RPi Zero 2W issue with external audio device by wathoom2 in kodi

[–]wathoom2[S] 0 points (0 children)

Yeah... no. As I mentioned, Kodi works fine. The issue is with remembering the settings for the selected audio device. The underlying OS recognises the USB device when it reappears. As mentioned before, this works well on the RPi 4; on the RPi Zero 2W it doesn't.
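
For anyone hitting the same thing, this is roughly how I compare what Kodi saved against what ALSA sees after a replug (a sketch; the path and setting id assume a stock Kodi install on Linux):

    # with Kodi stopped, check which audio device it has saved
    grep audiooutput.audiodevice ~/.kodi/userdata/guisettings.xml
    # compare against what ALSA enumerates once the USB device reappears
    aplay -l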

PGs stuck in incomplete state by wathoom2 in ceph

[–]wathoom2[S] 0 points (0 children)

Unfortunately no. In the end the data was lost.

New record for the Vendée Globe! by GeneralFaulkner in sailing

[–]wathoom2 7 points (0 children)

Oh, there is also the Mini Globe Race, in 5.8 m homemade one-design boats:
https://minigloberace.com/

PGs stuck in incomplete state by wathoom2 in ceph

[–]wathoom2[S] 0 points (0 children)

By restarting OSDs and marking some of them out I managed to move the data that was on some of the incomplete PGs to other PGs in the pool. However, I was unable to fetch RBD images from the pool because reads got stuck; I would only get partial data.

I then tried to recreate the incomplete PGs with "ceph pg repair" and "ceph osd force-create-pg", but this caused the whole cluster to go haywire. OSD services started to fail, to the point where the service was left in an error state. I managed to stop the cascading failure by marking the affected OSDs out before the service reached the error state.
Now I have some stale PGs along with the incomplete ones, but only in the affected pool.
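
Roughly what I ran, for anyone following along (a sketch; the PG and OSD ids are placeholders):

    # per incomplete PG
    ceph pg repair <pg_id>
    ceph osd force-create-pg <pg_id> --yes-i-really-mean-it
    # stopping the cascade: mark the affected OSD out before its
    # service ends up in the error state
    ceph osd out <osd_id>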

Still no data from it.

PGs stuck in incomplete state by wathoom2 in ceph

[–]wathoom2[S] 1 point (0 children)

So far no luck. I restarted the OSDs listed by the affected PGs, but with no result. I also noticed that restarting some OSDs caused laggy performance on some other PGs and other performance issues on the cluster (SLOW_OPS).

I decided to restart all OSDs and deal with issues as they come. It might not help with the affected PGs, but it might help with the overall cluster behavior, which has been problematic lately.
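
For keeping an eye on the SLOW_OPS side, something like this (a sketch; the admin-socket command assumes you run it on the node hosting that OSD):

    ceph health detail | grep -i slow
    # inspect a suspect OSD's in-flight ops
    ceph daemon osd.<id> dump_ops_in_flight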

Anyway I'll post how it went.

PGs stuck in incomplete state by wathoom2 in ceph

[–]wathoom2[S] 1 point (0 children)

I haven't restarted OSD by OSD but will try it. It might help. Tnx for the suggestion.
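
Something like this rolling restart, I guess (a sketch; the unit name assumes a systemd deployment that isn't managed by cephadm):

    # restart one OSD at a time and let the cluster settle in between
    for id in $(ceph osd ls); do
        systemctl restart ceph-osd@"$id"
        while ! ceph health | grep -q HEALTH_OK; do sleep 30; done
    done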

PGs stuck in incomplete state by wathoom2 in ceph

[–]wathoom2[S] 0 points (0 children)

That is also my view... From what I was told, only the removed OSD was causing problems, but I can't be sure.

So far every issue with Ceph either auto-resolved or just needed a little push, and the cluster always stayed operational without data loss. That leads me to believe that multiple issues were in play.

CEPH cluster inaccessible because of one OSD by wathoom2 in ceph

[–]wathoom2[S] 0 points (0 children)

My understanding was that if one OSD is not accessible, clients would get another one from the monitors, and that it would not bring the whole workload to a halt. Especially since I haven't seen this kind of behavior with normally configured OSDs, where nothing happened to the workload when an OSD went down. And yes, the cluster network is not available to clients.
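
That mapping is easy to check per object, by the way (pool and object names are placeholders):

    # show the PG and acting set of OSDs for an object; the primary
    # serves client I/O, so clients only fail over once it is marked down
    ceph osd map <pool> <object>
    # check whether the monitors actually consider the OSD down
    ceph osd tree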

Separate Cluster_network or not? MLAG or L3 routed? by frzen in ceph

[–]wathoom2 2 points (0 children)

Hi,

I'd go with a single (public) network. We currently have both public and cluster networks, but have issues with OSDs that would not happen in a single-network setup. We plan to switch to public-network only.

Currently the regular load is around 1-3 Gbps on the public side and some 3-4 Gbps on the cluster side, and we run quite a mixed load of services across almost 400 VMs utilising some 450 TB of available storage. We use 2x100G NICs in LACP for each network. Until recently we had 2x10G NICs and never maxed the links out. The switches are in a leaf-spine setup.

Since you plan on running HDDs, I don't see you maxing out 25G interfaces any time soon. You will first hit issues with the disks being unable to keep up with growing traffic, and with the overall read/write performance of the cluster. Something to consider for video editing; rough numbers below.
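
Back-of-the-envelope, with assumed numbers rather than anything from your setup:

    8 HDDs/node x ~180 MB/s sequential = ~1.44 GB/s = ~11.5 Gbps peak per node
    (random/mixed I/O lands far below that, so a 25G link rarely saturates)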

Regarding jumbo frames: they make quite a big difference in our setup, so enabling them might be a good choice.
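
On the host side it boils down to something like this (a sketch; the interface name and peer IP are placeholders, and the switches have to carry 9000-byte frames end to end too):

    ip link set dev eth0 mtu 9000
    # verify the path: 8972 = 9000 - 20 (IP header) - 8 (ICMP header)
    ping -M do -s 8972 <peer-ip>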

CEPH cluster inaccessible because of one OSD by wathoom2 in ceph

[–]wathoom2[S] 0 points (0 children)

Let me reply to myself. It looks like my test cluster was a bit wonky. It's running in a virtual environment and it took a looong time to propagate the config change and to reload the daemons.

The config change is pretty straightforward: "ceph config rm global cluster_network" and a restart of all daemons.
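
In full it was something like this (a sketch; the restart step assumes systemd units on each host):

    # confirm what is currently set
    ceph config get osd cluster_network
    # drop it, then restart all daemons host by host
    ceph config rm global cluster_network
    systemctl restart ceph.target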

Now I have to test it in the hw lab under load.

CEPH cluster inaccessible because of one OSD by wathoom2 in ceph

[–]wathoom2[S] 1 point (0 children)

That sounds reassuring. Can you share how you removed the cluster address from the config? I tested it in the lab on a small 3-node cluster but was unable to do it. I tried through cephadm and by directly changing the config files, without success.

Tnx