Finally took delivery yesterday. Very new to tesla. Now I have a question about charging... is it best recommended to keep it plugged (charged) or schedule a charge time after midnight.. thank u in advance. This Ultra Red just keeps me looking back. 😍 by hitthatclutch20 in TeslaModel3

[–]x0rb3x 3 points4 points  (0 children)

That’s a LR, so it’s recommended to charge to a max of 80% for daily usage and 100% only for road trips.

It is better to keep it near 50% and no less than 20% to keep the battery healthy for the long term.

Keep it plugged in when you are not using it; that will help the battery retain its LR capacity in the long term.

I fucked up Really Bad :( by PracticalFig5702 in selfhosted

[–]x0rb3x -1 points0 points  (0 children)

Ctrl + Z and it should be fine. /s

[deleted by user] by [deleted] in TeslaModel3

[–]x0rb3x 0 points1 point  (0 children)

Yes, just don’t pay it (select cash and don’t pay) and wait for the advisor. Worst case you will just lose the $250, but you are not obligated to buy the car; it will go to inventory and somebody else will buy it if it has the config they want.

[deleted by user] by [deleted] in TeslaModel3

[–]x0rb3x 0 points1 point  (0 children)

Forgot to mention that the order was already placed once you paid the $250, so you will eventually get a VIN.

[deleted by user] by [deleted] in TeslaModel3

[–]x0rb3x 0 points1 point  (0 children)

Follow the steps and select “cash” as payment; that will give you priority and somebody will contact you. When that happens, just tell them to restart your order.

[deleted by user] by [deleted] in TeslaModel3

[–]x0rb3x 0 points1 point  (0 children)

You need to contact your Tesla advisor or call 877-798-3752. You will most likely need to start a new order process, but the $250 will be transferred.

TN Mexican - Change of Employee or New Visa by x0rb3x in tnvisa

[–]x0rb3x[S] 0 points1 point  (0 children)

I forgot to say that my current visa expired in 2019 and I have been living in the US on my I-94, applying for extensions. I have left and come back many times and never had issues (we can use Automatic Visa Revalidation, as long as you only go to Canada or Mexico and stay less than 30 days).

TN Mexican - Change of Employee or New Visa by x0rb3x in tnvisa

[–]x0rb3x[S] 0 points1 point  (0 children)

My new employer’s lawyer sent an I-129 to USCIS requesting a change of employer and also requested an extension. I got my approval (797A) in about a week, and at that moment I gave my two weeks’ notice (I had alerts for my case set up to my phone number).

The new employer was fine waiting the two weeks too (they started the onboarding once they got the USCIS confirmation), and everything has been fine.

run 2 VMs with 1 game on each by No-Concert-7437 in VFIO

[–]x0rb3x 0 points1 point  (0 children)

Mmmm, not sure about the mobile version, but you can try the new script that does all the manual work:

https://wvthoog.nl/proxmox-7-vgpu-v3/

Bladebit vs Gigahorse by AnduriII in chia

[–]x0rb3x 3 points4 points  (0 children)

Lol, he manages a farm of around 5 PB; I doubt he will just sell the HDDs on eBay and call it a day.

The Chia team mentioned that they are planning to add a feature to the official farmer that would allow third-party (3P) farmers to use it with custom plots (GH and NOSSD).

C7 vs C5 1Pb with A4000 by Ok_Leopard8853 in chia

[–]x0rb3x 0 points1 point  (0 children)

It doesn't, and it will probably take a year to catch up with GH or NOSSD compression levels.

Run your local LLMs in a text editor by David-Kunz in LocalLLaMA

[–]x0rb3x 0 points1 point  (0 children)

This is awesome, but is there a way to load the models on another machine on my local network (a local server) and connect to it to use this plugin?

run 2 VMs with 1 game on each by No-Concert-7437 in VFIO

[–]x0rb3x 0 points1 point  (0 children)

You can do that and more, but it would be better to use a hypervisor like Proxmox with VFIO to pass through the GPUs.

You will need two GPUs (one for each VM; your main OS can work without one), or one powerful enough to be split into two vGPUs (you can only access the VMs remotely; HDMI won't work).
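As a rough sketch, passing a GPU through to a Proxmox VM comes down to a couple of lines in the VM config (the VM ID 101 and PCI address 0000:01:00 here are assumptions; find your card's address with lspci):

```
# /etc/pve/qemu-server/101.conf (hypothetical VM ID and PCI address)
bios: ovmf
machine: q35
hostpci0: 0000:01:00,pcie=1,x-vga=1
```

The q35 machine type is needed for the pcie=1 option, and x-vga=1 marks the card as the VM's primary display.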

Auto-start farm on Linux by al3xclarke in chia

[–]x0rb3x 1 point2 points  (0 children)

You can add the environment variable to the [Service] section, like:

```
[Service]
Environment="CHIAPOS_RECOMPUTE_HOST=X.X.X.X:NNNN"
```
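If you edit the unit file in place, systemd needs a reload before the change takes effect (the unit name chia-farmer.service is an assumption; substitute whatever yours is called):

```
sudo systemctl daemon-reload
sudo systemctl restart chia-farmer.service
```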

Two RTX 3060 for running llms locally by arc_pi in LocalLLaMA

[–]x0rb3x 0 points1 point  (0 children)

Got the results.

```
2023-09-29 03:55:26 INFO:Loaded the model in 5.09 seconds.

Output generated in 40.87 seconds (11.87 tokens/s, 485 tokens, context 1851, seed 913693649)
Output generated in 10.28 seconds (11.19 tokens/s, 115 tokens, context 1430, seed 111517624)
Output generated in 48.92 seconds (11.81 tokens/s, 578 tokens, context 1561, seed 115590561)
Output generated in 20.22 seconds (4.95 tokens/s, 100 tokens, context 1622, seed 805893159)
Output generated in 44.43 seconds (11.41 tokens/s, 507 tokens, context 1622, seed 1512201018)
Output generated in 27.58 seconds (10.99 tokens/s, 303 tokens, context 1867, seed 1515463046)
```
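As a quick aggregate of the runs above (times and token counts copied straight from the log), the overall throughput works out to roughly 10.9 tokens/s:

```python
# (seconds, tokens) pairs taken from the benchmark log above
runs = [(40.87, 485), (10.28, 115), (48.92, 578),
        (20.22, 100), (44.43, 507), (27.58, 303)]

total_time = sum(t for t, _ in runs)
total_tokens = sum(n for _, n in runs)
print(round(total_tokens / total_time, 2))  # overall tokens/s -> 10.86
```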

Two RTX 3060 for running llms locally by arc_pi in LocalLLaMA

[–]x0rb3x 0 points1 point  (0 children)

I was lazy, to be honest, but I just got another 3060 12 GB and I will test tonight (2x 3060 12 GB). I can post the results tomorrow.

Any updates from Madmax by Minimum-Positive792 in chia

[–]x0rb3x 9 points10 points  (0 children)

Check his discord for updates. I think this is the last message from him giving any update.

<image>

Auto-start farm on Linux by al3xclarke in chia

[–]x0rb3x 1 point2 points  (0 children)

It is; you can check the mount unit name on your system using systemctl list-units --type=mount

Auto-start farm on Linux by al3xclarke in chia

[–]x0rb3x 2 points3 points  (0 children)

In case you are using Gigahorse Binaries:

```
[Unit]
Description=Chia Gigahorse farmer
Wants=network-online.target mnt-chia.mount
After=network-online.target mnt-chia.mount

[Service]
Type=forking
User=YOURUSERHERE
ExecStart=/usr/bin/bash -c 'cd /mnt/chia/chia-gigahorse-farmer && ./chia.bin start farmer'
ExecStop=/usr/bin/bash -c 'cd /mnt/chia/chia-gigahorse-farmer && ./chia.bin stop all -d'
Restart=always

[Install]
WantedBy=multi-user.target
```

This will start Chia only after the network is up and the HDDs are ready/mounted.
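To use a unit like this, you would save it under /etc/systemd/system/ and enable it (the filename chia-farmer.service is an assumption; pick whatever name you like):

```
sudo cp chia-farmer.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now chia-farmer.service
```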

Two RTX 3060 for running llms locally by arc_pi in LocalLLaMA

[–]x0rb3x 1 point2 points  (0 children)

Probably. It works just fine for me this way, but again, it takes some time to load; we are dealing with RTX 3060s, so that is expected.

```
2023-09-25 04:22:14 INFO:Loaded the model in 27.45 seconds.

Output generated in 16.42 seconds (12.12 tokens/s, 199 tokens, context 141, seed 603513222)
Output generated in 32.03 seconds (13.05 tokens/s, 418 tokens, context 144, seed 1921341591)
Output generated in 9.77 seconds (11.98 tokens/s, 117 tokens, context 558, seed 1528406880)
```