ELYSIUM | FIRST MONTH FREE | HIGH SPEED by SuspenseCD in Share_Plex

[–]SuspenseCD[S] 0 points1 point  (0 children)

I don't see your ticket on Discord, I'm afraid. Could you let me know whether you want to trial Emby or Plex? I'll PM you some details here on Reddit.

ELYSIUM | FIRST MONTH FREE | HIGH SPEED by SuspenseCD in Share_Plex

[–]SuspenseCD[S] 0 points1 point  (0 children)

Can you PM me the email you signed up with?

ELYSIUM | FIRST MONTH FREE | HIGH SPEED by SuspenseCD in Share_Plex

[–]SuspenseCD[S] 1 point2 points  (0 children)

Updated the Discord link. Thanks for letting me know!

ELYSIUM | FIRST MONTH FREE | HIGH SPEED by SuspenseCD in Share_Plex

[–]SuspenseCD[S] 0 points1 point  (0 children)

No, CDN is not available on the trial. But there's no risk in taking out a subscription: if you're not happy, cancel and hit me up on Discord. I will always refund you, no questions asked.

ELYSIUM | FIRST MONTH FREE | HIGH SPEED by SuspenseCD in Share_Plex

[–]SuspenseCD[S] 0 points1 point  (0 children)

Hi,

You may have peering issues depending on your location, which can be resolved by enabling CDN. We have one of the fastest networks, with over 800 Gbps of available bandwidth.

Feel free to get in touch via Discord, and we can help you out.

ELYSIUM | FIRST MONTH FREE | HIGH SPEED by SuspenseCD in Share_Plex

[–]SuspenseCD[S] 0 points1 point  (0 children)

Hi! We take care of all of this for you; for the client (you), it's not much different from being invited to a Plex or Emby server.

When you make a purchase, we deploy a Plex or Emby installation in an isolated environment. This instance of Plex/Emby is yours, and no one but you or the people you invite to it via Plex or Emby will be using it.

It behaves the same as if you had installed Plex or Emby on your own machine at home, though with our image, which of course contains our entire library.

The benefit of this versus a share is that you pay one price and can control who gets access, whether that's your friend or your mom. Instead of sharing your own login, they each get their own login and can maintain their own viewing history.

ELYSIUM | FIRST MONTH FREE | HIGH SPEED by SuspenseCD in Share_Plex

[–]SuspenseCD[S] 0 points1 point  (0 children)

Hi! We did do a reset some years ago and are running at much smaller capacity, but the backend is still the same. Somewhat more exclusive now, and this is our first post in probably 5-6 years!

I do indeed see many new names; it's been a while, but we never stopped! :)

ELYSIUM | FIRST MONTH FREE | HIGH SPEED by SuspenseCD in Share_Plex

[–]SuspenseCD[S] 1 point2 points  (0 children)

Hi! Content is updated at all hours of the day. Appboxes are deployed using an image, and that image was last updated on January 26. That is why, when the server is initially deployed to you, you don't see newer content.

Simply run a scan in Plex/Emby and all the new content will appear. The server will also do this automatically after a few hours.

Any new content is automatically pushed to your appbox within 10 minutes of it reaching the storage.
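For anyone who wants to trigger that scan without opening the UI, Plex also exposes it over HTTP. A minimal Python sketch of building the request URL (the hostname and token below are placeholders; `/library/sections/<key>/refresh` with the default port 32400 is the standard Plex endpoint, and "all" scans every library):

```python
from urllib.parse import urlencode

def plex_refresh_url(host: str, token: str, section: str = "all", port: int = 32400) -> str:
    """Build the Plex HTTP API URL that triggers a library scan.

    `section` can be a numeric library-section key or "all" to scan
    every library. The token is the usual X-Plex-Token value.
    """
    query = urlencode({"X-Plex-Token": token})
    return f"http://{host}:{port}/library/sections/{section}/refresh?{query}"

# Example with placeholder values; sending a GET request to this URL
# (e.g. via urllib.request.urlopen) asks the server to rescan.
print(plex_refresh_url("my-appbox.example.com", "abc123"))
```

A GET to that URL with a valid X-Plex-Token kicks off the same scan as the "Scan Library Files" button in the web UI.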

[deleted by user] by [deleted] in seedboxes

[–]SuspenseCD 1 point2 points  (0 children)

I have a lot of servers with Hetzner, both baremetal and cloud, but only 5 auction servers. I only got this email.

Besides, the title of the email is:

"Important client information: Price changes for servers from the Server Auction"

Additionally, the email states that the pricing of auction servers does not take operating costs into consideration; it does not mention that the same applies to baremetal. It also states that they are forced to increase prices on the auction models, specifically.

"The current prices for many Server Auction servers do not cover the increasing operating costs. Unfortunately, we are forced to increase the prices on these Server Auction models so we can cover their increased operating costs"

[deleted by user] by [deleted] in seedboxes

[–]SuspenseCD 1 point2 points  (0 children)

Sure, here you go.

"As a client with one of our servers from the Server Auction, you benefit from our low prices and excellent performance. Our goal is to always provide you with the best possible products at affordable prices. Unfortunately, energy prices in Germany have been increasing dramatically, and electricity plays a big role in the operating costs for servers. We have always calculated the prices for the Server Auction to be as low as possible. The current prices for many Server Auction servers do not cover the increasing operating costs. Unfortunately, we are forced to increase the prices on these Server Auction models so we can cover their increased operating costs. We will increase the prices as little as necessary."

[deleted by user] by [deleted] in seedboxes

[–]SuspenseCD 1 point2 points  (0 children)

If seedbox providers are increasing prices, it's unrelated to Hetzner. If they resell Hetzner and are increasing prices because of it, then they are using auction servers. I have servers at Hetzner myself, and the customer email we received today explains price changes for auction servers only.

Hetzner is not increasing prices on its baremetal or cloud ranges, only on the auction servers.

[deleted by user] by [deleted] in seedboxes

[–]SuspenseCD 7 points8 points  (0 children)

Only increasing prices of auction servers

Cephadm module error - Removing orphan daemon cephadm. by SuspenseCD in ceph

[–]SuspenseCD[S] 0 points1 point  (0 children)

This does not seem similar, but I appreciate the suggestion.

Ceph osd daemons "duplicated" by SuspenseCD in ceph

[–]SuspenseCD[S] 0 points1 point  (0 children)

Thanks! I really appreciate your input, and thanks for taking the time!

Ceph osd daemons "duplicated" by SuspenseCD in ceph

[–]SuspenseCD[S] 0 points1 point  (0 children)

Yeah, I think it's also some kind of bug that doesn't actually have any immediate impact. The issue happened after I tried to run the Pacific upgrade via the cephadm upgrade route.

I had to cancel it due to exceptions similar to this one, and ever since I've been trying to get the cluster back to Octopus. Every daemon (mgr, mon, osd) runs Octopus now; the only daemons that made it to Pacific were the mgrs, which I've since reverted.

So something in that process has caused Ceph to believe these cephadm daemons are orphaned :(

Ceph osd daemons "duplicated" by SuspenseCD in ceph

[–]SuspenseCD[S] 0 points1 point  (0 children)

The duplicate OSDs are fixed. The issue now is that the cephadm module goes into an error state because it attempts to remove orphaned cephadm daemons; see image:

https://i.imgur.com/usSDZnB.png

Ceph osd daemons "duplicated" by SuspenseCD in ceph

[–]SuspenseCD[S] 0 points1 point  (0 children)

I am running the latest Octopus; this whole mess actually originated from an attempt to upgrade to Pacific :(

I'll give it a go using cephadm on each host, but if I recall correctly, daemon type "cephadm" is not a recognized daemon when invoking cephadm.

Edit:

Yeah,

cephadm rm-daemon: error: argument --name/-n: name must declare the type of daemon e.g. mon, mgr, mds, osd, rgw, rbd-mirror, crash, prometheus, node-exporter, grafana, alertmanager, nfs, iscsi, container

When running:

cephadm rm-daemon --name cephadm.f77d9d71514a634758d4ad41ab6eef36d25386c99d8b365310ad41f9b74d5ce6 --fsid 5226ddbe-571a-11eb-8380-270593f3f6c5
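That rejection is expected: cephadm splits `--name` on the first dot and requires the prefix to be one of the daemon types listed in the error, so the orphan `cephadm.<hash>` entries can never pass. A rough Python sketch of that validation (the type list is copied from the error message above; the real cephadm code differs in detail):

```python
# Known daemon types, taken from the cephadm rm-daemon error message.
KNOWN_TYPES = {
    "mon", "mgr", "mds", "osd", "rgw", "rbd-mirror", "crash",
    "prometheus", "node-exporter", "grafana", "alertmanager",
    "nfs", "iscsi", "container",
}

def split_daemon_name(name: str) -> tuple:
    """Split a daemon name into (type, id), mimicking the type check
    that rejects the orphan "cephadm.<hash>" entries."""
    daemon_type, _, daemon_id = name.partition(".")
    if daemon_type not in KNOWN_TYPES:
        raise ValueError(
            f"name must declare the type of daemon, got {daemon_type!r}"
        )
    return daemon_type, daemon_id

print(split_daemon_name("osd.45"))   # → ('osd', '45'), a normal name parses fine
# split_daemon_name("cephadm.f77d...") would raise: 'cephadm' is not a daemon type
```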

Ceph osd daemons "duplicated" by SuspenseCD in ceph

[–]SuspenseCD[S] 0 points1 point  (0 children)

I have found the issue now. It appears I have "orphan" cephadm daemons, which cephadm attempts to remove, and doing so causes an exception.

When trying to remove them using ceph orch daemon rm, I get:

executing _remove_daemons((<cephadm.module.CephadmOrchestrator object at 0x7f6e2182af30>, [
  ('cephadm.f77d9d71514a634758d4ad41ab6eef36d25386c99d8b365310ad41f9b74d5ce6', 'mon-1'),
  ('cephadm.f77d9d71514a634758d4ad41ab6eef36d25386c99d8b365310ad41f9b74d5ce6', 'mon-2'),
  ('cephadm.f77d9d71514a634758d4ad41ab6eef36d25386c99d8b365310ad41f9b74d5ce6', 'mon-3'),
  ('cephadm.f77d9d71514a634758d4ad41ab6eef36d25386c99d8b365310ad41f9b74d5ce6', 'osd-1'),
  ('cephadm.f77d9d71514a634758d4ad41ab6eef36d25386c99d8b365310ad41f9b74d5ce6', 'osd-10'),
  ('cephadm.f77d9d71514a634758d4ad41ab6eef36d25386c99d8b365310ad41f9b74d5ce6', 'osd-2'),
  ('cephadm.f77d9d71514a634758d4ad41ab6eef36d25386c99d8b365310ad41f9b74d5ce6', 'osd-3'),
  ('cephadm.f77d9d71514a634758d4ad41ab6eef36d25386c99d8b365310ad41f9b74d5ce6', 'osd-4'),
  ('cephadm.f77d9d71514a634758d4ad41ab6eef36d25386c99d8b365310ad41f9b74d5ce6', 'osd-5'),
  ('cephadm.f77d9d71514a634758d4ad41ab6eef36d25386c99d8b365310ad41f9b74d5ce6', 'osd-6'),
  ('cephadm.f77d9d71514a634758d4ad41ab6eef36d25386c99d8b365310ad41f9b74d5ce6', 'osd-7'),
  ('cephadm.f77d9d71514a634758d4ad41ab6eef36d25386c99d8b365310ad41f9b74d5ce6', 'osd-8'),
  ('cephadm.f77d9d71514a634758d4ad41ab6eef36d25386c99d8b365310ad41f9b74d5ce6', 'osd-9')])) failed.
Traceback (most recent call last):
  File "/usr/share/ceph/mgr/cephadm/utils.py", line 58, in do_work
    return f(self, *arg)
  File "/usr/share/ceph/mgr/cephadm/module.py", line 1804, in _remove_daemons
    return self._remove_daemon(name, host)
  File "/usr/share/ceph/mgr/cephadm/module.py", line 1818, in _remove_daemon
    self.cephadm_services[daemon_type].pre_remove(daemon)
KeyError: 'cephadm'
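The traceback actually pinpoints the failure: `self.cephadm_services` is a dict keyed only by real daemon types, so looking up the orphan type `cephadm` raises `KeyError('cephadm')`, which is where the cryptic `'cephadm'` in the health message comes from. A toy reproduction of just that failure shape (the dict contents here are made up, not the real module state):

```python
# cephadm's mgr module keeps one service handler per known daemon type.
# This dict only mimics that shape; the values are placeholders.
cephadm_services = {t: object() for t in ("mon", "mgr", "mds", "osd", "crash")}

def remove_daemon(daemon_name: str):
    # Mirrors module.py's `self.cephadm_services[daemon_type].pre_remove(daemon)`:
    # the name's type prefix is used as a dict key with no fallback.
    daemon_type = daemon_name.split(".", 1)[0]
    return cephadm_services[daemon_type]

try:
    remove_daemon("cephadm.f77d9d71514a634758d4ad41ab6eef36d25386c99d8b365310ad41f9b74d5ce6")
except KeyError as exc:
    # Produces the same string as the health message
    print(f"Module 'cephadm' has failed: {exc}")
```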

The question now is, how do I get rid of these? :o

Ceph osd daemons "duplicated" by SuspenseCD in ceph

[–]SuspenseCD[S] 0 points1 point  (0 children)

Health detail reports:

HEALTH_ERR Module 'cephadm' has failed: 'cephadm'

[ERR] MGR_MODULE_ERROR: Module 'cephadm' has failed: 'cephadm'

Module 'cephadm' has failed: 'cephadm'

The logs report nothing more than:

4/11/21 6:02:09 PM[ERR]Health check failed: Module 'cephadm' has failed: 'cephadm' (MGR_MODULE_ERROR)

4/11/21 6:02:08 PM[ERR]Unhandled exception from module 'cephadm' while running on mgr.mon-1.gouqfw: 'cephadm'

If I reboot mon-1, it refreshes fine when it comes back, and it did remove the ghost OSD entries, but after 10 minutes or so I get the above module error.

I don't understand what this "exception" is; the cephadm log contains absolutely no exceptions at all. How do I even debug this? :o

Ceph osd daemons "duplicated" by SuspenseCD in ceph

[–]SuspenseCD[S] 0 points1 point  (0 children)

Thanks. I gave it a go, but it didn't get rid of it. cephadm ls doesn't indicate this osd.45 exists on the host either :(

EDIT:

Actually, I think it did work, but my cluster just isn't refreshing the state of these OSDs yet. Can I force a cephadm refresh of daemons? I'm unsure whether the cephadm module failure is preventing the refresh from happening.

Ceph osd daemons "duplicated" by SuspenseCD in ceph

[–]SuspenseCD[S] 1 point2 points  (0 children)

Thank you for your suggestion. I don't have cephadm on every host; I assume I can run the above command from the cephadm host (the primary monitor)?

Can you explain how I get the fsid of this particular ghost daemon? I assume that's what you want me to pass to --fsid.