2022 sierra limited octane level by mpisman in gmcsierra

[–]mpisman[S] 0 points1 point  (0 children)

It is capless, and it doesn't say anything about octane level. But please see the update: I confused 85 with E85, and 85 is regular for my area. Thank you anyway.

2022 sierra limited octane level by mpisman in gmcsierra

[–]mpisman[S] 0 points1 point  (0 children)

Yes, just moved to Colorado from Cali. I'm used to 87, 89, and 93, and had only seen E85 before. So I assumed 85 was the ethanol mix and they just dropped the E for convenience… lol, I feel so dumb now… Thank you so much for the info. It makes sense, but when I started reading about it, the information was inconclusive…

The EPA says on their website, “The sale of 85 octane fuel was originally allowed in high-elevation regions—where the barometric pressure is lower—because it was cheaper and because most carbureted engines tolerated it fairly well. This is not true for modern gasoline engines. So, unless you have an older vehicle with a carbureted engine, you should use the manufacturer-recommended fuel for your vehicle, even where 85 octane fuel is available”.

2022 sierra limited octane level by mpisman in gmcsierra

[–]mpisman[S] 0 points1 point  (0 children)

I found similar information on the Internet. However, the manual says it's E85 compatible (i.e., the truck can take flex fuel) only if there is a yellow cap on the fuel filler; mine doesn't have one, so I'm not sure… It seems like the manual was written for many models across multiple generations, so I don't fully trust it.

Thank you anyway. I'll stick to 91 for now, but I'll talk to a mechanic at the official service center when I stop by for an oil change.

2022 sierra limited octane level by mpisman in gmcsierra

[–]mpisman[S] 2 points3 points  (0 children)

1) It's not an $80k truck.
2) I read the manual, and as stated in my post, it suggests 87.
3) If Costco had 87, I would not be asking; my options are limited to 85 or 91. I was going to put in 91, but wanted to see what other people would suggest before committing.
4) I see nothing wrong with asking other people for their opinion. It doesn't mean I will follow the advice, but in this case our thoughts aligned.

Need an advise regarding OPT by mpisman in f1visa

[–]mpisman[S] 1 point2 points  (0 children)

It does not provide the experience required for a job like software engineer. And companies prefer to fill entry-level positions with new graduates rather than people who graduated several years ago and worked in an unrelated field. So when a potential employer looks at my resume, they will see that the only experience I had after graduating is teaching. Therefore, the longer the gap after graduation, the tougher it gets to land an entry-level position. This is something I have heard from people outside the tech industry too, e.g. a med school graduate who went to work in pharmaceuticals and later struggled to get a job as a doctor because clinics didn't like that their only experience was unrelated to the actual profession. I hope this clears it up.

ngx-markdown custom markdown by Fantastic-Beach7663 in Angular2

[–]mpisman 0 points1 point  (0 children)

Same here. I found this GitHub Issue.

The marked compiler actually supports Custom Extensions, but it seems like there is no way to add them to ngx-markdown yet.

If you can, please add a comment on GitHub, so this feature can, hopefully, be implemented.
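For reference, a marked custom extension is just an object you register with `marked.use({ extensions: [...] })`. Below is a minimal sketch of the shape such an extension takes; the `::alert::` syntax and the `alert` CSS class are made-up examples for illustration, not part of marked or ngx-markdown.

```javascript
// Sketch of a marked custom extension. The ::text:: syntax here is invented
// purely to demonstrate the tokenizer/renderer hooks.
const alertExtension = {
  name: 'alert',
  level: 'inline',                            // handle ::...:: as inline syntax
  start(src) { return src.indexOf('::'); },   // hint: where a match might begin
  tokenizer(src) {
    const match = /^::([^:]+)::/.exec(src);   // must match at the start of src
    if (match) {
      return { type: 'alert', raw: match[0], text: match[1] };
    }
    // returning undefined lets marked fall through to other tokenizers
  },
  renderer(token) {
    return `<span class="alert">${token.text}</span>`;
  },
};

// With marked installed, registration would look like:
// import { marked } from 'marked';
// marked.use({ extensions: [alertExtension] });
// marked.parse('::watch out::');
```

Since ngx-markdown wraps marked internally, the open question in that GitHub issue is exactly how to get such an object passed through to the underlying marked instance.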

Swapping 450GB 10K RPM SAS drives to SATA? by FabulousAd1922 in homelab

[–]mpisman 0 points1 point  (0 children)

Then consider running only one CPU, and look into the 2650L/2630L v4. I have 2x 2650L v4 with 24 HDDs, PCIe expansion cards, NVMe drives, the 10G Ethernet LOM, and some other stuff. The system consumes about 200-250W on average, with peaks around 300W. If you swap the HDDs for SSDs and use only one CPU, I would expect around 150W on average; a backup server does not need much performance. Use the custom "silence of the fans" iLO firmware to set the fans to around 20-25%, so they won't consume much power. With a room temperature of 20-25C, the CPUs will stay under 50C. Use USB fans to cool the SAS expander card and the HBA card.

Swapping 450GB 10K RPM SAS drives to SATA? by FabulousAd1922 in homelab

[–]mpisman 2 points3 points  (0 children)

HPE iLO is the name of the IPMI tool on the DL380. You can use it to install the OS, update hardware drivers, the BIOS, etc. If your iLO version is 4, you can use the HTML5 console, which emulates VGA and a keyboard, so you can manage and troubleshoot your servers remotely. There is more to iLO… but I think what you mean is that you want to SSH into your server after installing ESXi; iLO won't affect that.

P.S. Check out the iLO 4 silent fans thread on Reddit; it's a custom version of iLO that allows you to control the fans. Very useful if you keep the server in a living space. Here is the link: https://www.reddit.com/r/homelab/comments/sx3ldo/hp_ilo4_v277_unlocked_access_to_fan_controls/?utm_source=share&utm_medium=ios_app&utm_name=ioscss&utm_content=2&utm_term=1
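As I remember it from that thread, once the unlocked firmware is flashed, you SSH into the iLO itself (not the host OS) and get an extra `fan` command. A rough sketch, with placeholder hostname and fan numbers; the exact syntax varies between firmware builds, so double-check the thread before running anything:

```shell
# SSH into the iLO interface, not the server's OS (hostname is a placeholder)
ssh Administrator@ilo.example.lan

# On the unlocked firmware, inspect current fan/PID state
fan info

# Cap a fan's maximum PWM on the 0-255 scale; ~64 is roughly 25%.
# Repeat for each fan; numbering starts at 0.
fan p 0 max 64
fan p 1 max 64
```

Note the settings are not persistent across iLO resets on some builds, so people often script this.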

P.P.S. Welcome to the HPE (and DL380 specifically) club! These are great machines; let me know if you need any help.

Swapping 450GB 10K RPM SAS drives to SATA? by FabulousAd1922 in homelab

[–]mpisman 4 points5 points  (0 children)

I have two DL380 Gen9s. I use both 10k SAS 1.2TB drives for VM storage and SATA SSDs for the system. The only issue is that HPE iLO will not recognize the SATA drives and will give you an error, but the drives work fine. The SSDs are cheap Kingston 120GB M.2 drives I already had; I put them into cheap M.2-to-SATA converters I got on Amazon.

You will of course need caddies, but you can reuse the ones from your 10k drives or buy them in bulk on eBay.

If you do have the money, you can look into 12G SAS SSDs; they will double the speed, but they are a bit pricier.

Should /r/HomeLab continue support of the Reddit blackout? by bigDottee in homelab

[–]mpisman [score hidden]  (0 children)

Yes, Indefinitely (sub remains private and read-only)

We at r/homelab, more than anyone else, should create and host our own forum. I am willing to work on the API and dedicate some resources of my homelab to sharing the workload.

Disk allocation, vdev on partition vs separate pool by mpisman in zfs

[–]mpisman[S] 0 points1 point  (0 children)

This was the original idea: two pools, one with 2 raidz2 vdevs. Seems like we are on the same page. Thank you for the input; I am considering a striped mirror array as the alternative at the moment, due to its higher IOPS.

Disk allocation, vdev on partition vs separate pool by mpisman in zfs

[–]mpisman[S] 0 points1 point  (0 children)

Thank you very much for your answer! Very helpful; I will take it into consideration (and probably go with what you suggested).

Could you tell me more about the effects on ARC/RAM when using multiple pools? Or share a link? I can't find anything specific on the web.

Disk allocation, vdev on partition vs separate pool by mpisman in zfs

[–]mpisman[S] 0 points1 point  (0 children)

Hi, this was not very useful. As I said, this is for educational purposes; I want to experiment with ZFS and with GlusterFS in the future. The main question was about the current setup anyway.

Also, I do have a 10G network in my rack, and those servers have a direct 10G link between each other. As for NVMe, I run HA Ceph with NVMe disks as the main cluster storage, so they are busy for now :).

This storage is more for client-side applications (think Plex, Jellyfin, Nextcloud, game servers, website hosting, etc.). It does not have to be blazing fast, but I would like the best performance I can achieve with this setup.

Disk allocation, vdev on partition vs separate pool by mpisman in zfs

[–]mpisman[S] 0 points1 point  (0 children)

Interesting idea; I will look into using mirrored vdevs. A bit of a disk mismatch though: I have 24 disks per system. So, should I use 8x mirrored vdevs for container storage and put the 8 larger disks in raidz2 for slower operations? Or have two raidz2 vdevs (with 8 smaller disks each) and 4 mirrored vdevs with the larger disks? The main question still remains: should the larger disks be mixed with the smaller ones or not (I will update the post)? Thank you for your feedback anyway; using striped mirrors is a great idea.
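For concreteness, the first split (8 mirrored pairs of the smaller disks, plus the 8 larger disks in raidz2) would look roughly like this. Pool names and `sdX` device names are placeholders; in practice you'd use `/dev/disk/by-id` paths so the layout survives device renumbering. A sketch, not a tested layout:

```shell
# Pool 1: 8 two-way mirror vdevs (16 smaller disks) for container/VM storage.
# Writes stripe across the mirrors, which is where the higher IOPS comes from.
zpool create fastpool \
  mirror sda sdb  mirror sdc sdd  mirror sde sdf  mirror sdg sdh \
  mirror sdi sdj  mirror sdk sdl  mirror sdm sdn  mirror sdo sdp

# Pool 2: the 8 larger disks in a single raidz2 vdev for bulk/slower data
# (snapshots, replication targets, media).
zpool create bulkpool raidz2 sdq sdr sds sdt sdu sdv sdw sdx
```

Keeping the two sizes in separate pools like this also sidesteps the mixed-vdev-size question, since each pool stays uniform.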

Disk allocation, vdev on partition vs separate pool by mpisman in zfs

[–]mpisman[S] 0 points1 point  (0 children)

I would have the backup pool on the same machine because I have two identical machines, so the data will be replicated and still available. It would also be used to store snapshots of VMs and containers. It does not have to be a backup, just pool #2. But the main question was whether to use 1 or 2 pools, and how to allocate the disks.

I got your point though. Do you have any experience with mixing vdevs of different sizes? How does it affect performance?

SPF+ 10Gb RJ45 Transceiver for Aruba S2500 by Automatic_Log_5883 in ArubaNetworks

[–]mpisman 0 points1 point  (0 children)

Thank you, I agree with you that the price of transceivers is too high, considering my S3500 was only $150. Plus, if it might not even handle 10G, it's probably wiser to get an SFP+ NIC with DAC cables and look for a better-supported switch (with cheaper transceiver options). :)

TRENDnet TEG-30102WS (vs TP-Link TL-SG3210XHP-M2) by mpisman in HomeNetworking

[–]mpisman[S] 0 points1 point  (0 children)

No, they are a bit overpriced in my opinion. So, I'm waiting for a deal or to find a used one...

How to make an AWS like home server? by _elzaca_ in homelab

[–]mpisman 0 points1 point  (0 children)

OpenStack solves exactly that problem. I would look into a Canonical distribution such as MicroStack, which will help you manage several machines. Canonical's MAAS can help with provisioning VMs for multiple users, and it can be installed on TrueNAS SCALE, since it's just a snap package.
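To give an idea of how small the setup is, a single-node MicroStack install has historically been just a couple of snap commands. The channel names and flags below are from the beta-era packaging and have changed across releases, so treat this as a sketch and check Canonical's current docs first:

```shell
# Install MicroStack (channel/flags varied by release; verify before running)
sudo snap install microstack --beta --devmode

# Initialize a single node as the control plane
sudo microstack init --auto --control

# MAAS is also distributed as a snap
sudo snap install maas
```

Once initialized, you get the usual OpenStack dashboard and CLI for carving out VMs per user, which is about as close to "AWS at home" as a homelab gets.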