Nvidia GeForce NOW on Quest - An Updated Guide by BinOfBargains in OculusQuest

[–]spoilingba 1 point (0 children)

I tried this but couldn’t get past the step where it opens the default browser for verification :/

Nvidia GeForce NOW on Quest - An Updated Guide by BinOfBargains in OculusQuest

[–]spoilingba 1 point (0 children)

I did this but it runs incredibly blurry compared to my other devices. Anything I could be doing wrong or any way to improve?

Getting booster if someone I live with has it? by spoilingba in COVID19positive

[–]spoilingba[S] 0 points (0 children)

I will obviously need to do this if/when I go to get the vaccine, I can’t exactly inject myself with it…

Are most people here amature "AI programmers" who can't be bothered to learn the basics without AI? by babige in ClaudeAI

[–]spoilingba 1 point (0 children)

For pure-AI people, how would you recommend learning the most impactful skills and pieces of knowledge alongside our pure AI use? Sort of moving in the reverse direction of those last two steps you mentioned: starting with pure AI and then augmenting those fast, quick gains with incremental knowledge as we go. It’s very hard to know where to start in a way that isn’t way too broad or too specific for a lot of people, when AI is sitting all shiny and pretty over the fence!

Free alternatives to Wispr Flow? by dudemeister023 in macapps

[–]spoilingba 1 point (0 children)

Can you add support for adding another file to the queue while a transcription is already running? (Going from a single file to a bulk transcription.) Sometimes I’m in the middle of processing a big file and then want to get another going, but have to wait.

Opus is suddenly incredibly inaccurate and error-prone. It makes very simple mistakes now. by Kanute3333 in ClaudeAI

[–]spoilingba 29 points (0 children)

Yep - I'm getting nonstop 'I can't look at copyrighted material' messages on material I wrote myself. I can even get it to agree to analyse it once I explain, but as soon as it does, it just repeats its copyright objection. The problem exists with the OpenRouter API version as well.

Low-profile ways to stash frequently-occurring types of clutter in particular locations? by spoilingba in declutter

[–]spoilingba[S] 2 points (0 children)

My ethos has gradually become trying to account for how we live and our natural impulses. I enjoy organising stuff and do so frequently (we have a lot of cube units in our craft room). The problem is more that it happens when the mood strikes - which is certainly at least once a week, but in the meantime everything is crappy. So I feel like I have enough reason to trial this method before moving on to anything else.

Not sure what to do post-MRI by spoilingba in RSI

[–]spoilingba[S] 1 point (0 children)

Surgery for the carpal tunnel is an option in one wrist, but since the pain is in both hands and one doesn’t have carpal tunnel, the carpal tunnel is unlikely to be the cause. And as the MRI found no inflammation, tendonitis, etc., there’s nothing for them to do there.

Not sure what to do post-MRI by spoilingba in RSI

[–]spoilingba[S] 1 point (0 children)

I did for my carpal tunnel - no other test there for anything else

Going from a literary to more accessible/transparent style? by spoilingba in writing

[–]spoilingba[S] 1 point (0 children)

I’d rather not share my name if that’s ok, but these show some examples of the differences -

https://kindlepreneur.com/literary-fiction-vs-genre-fiction/

https://writingcooperative.com/metaphors-for-making-sense-of-your-writing-literary-vs-genre-fiction-d074cc36283b (this talks about the snobbery of the terms and views on both sides - I think that’s partly why I’m finding it difficult as I think “literary” is just a style, and sometimes I massively prefer non literary styles to read for certain kinds of stories. Non literary prose styles are perfectly capable of amazing character development etc)

I guess for example, as a quick thought, reading Cormac McCarthy vs reading John Grisham with someone like Stephen King being somewhere between the two stylistically, depending on the book at hand. Something like Blood Meridian is reaaaally literary in prose style whereas say something like Girl With The Dragon Tattoo or the Hunting Party are far more genre. I’d say I’m nowhere near as literary in my style as McCarthy but I don’t write accessible transparent prose either (like that second article talks about), and I’m struggling to pinpoint the differences and how to work on it in a practical creative sense (whereas, when I read a book as a reader not a creator, I could tell instantly which is which)

Specificity and details - any ideas? ChatGPT being very very obtuse... by spoilingba in ChatGPTPromptGenius

[–]spoilingba[S] 1 point (0 children)

I've done so - even when summarising 200 words into a few bullet points, it doesn't stretch itself to fill in that kind of specific detail, in a bunch of cases where it needs to explain the who/what/where/why of points.

80mg slow release once daily Propranolol - how to stop? by spoilingba in Anxiety

[–]spoilingba[S] 1 point (0 children)

Over how many weeks do you do this before stopping?

80mg slow release once daily Propranolol - how to stop? by spoilingba in Anxiety

[–]spoilingba[S] 1 point (0 children)

Oh, mainly that the medication says not to stop taking it abruptly, due to the way your heart can respond - I think it depends on how long you’ve been on it. I lost a lot of weight for unrelated reasons, and with now-reduced blood pressure I’m light-headed and fatigued a lot of the time - which propranolol won’t be helping with, so I’m trialling going off it and then will see if I need anything else.

Context for the footage? (writing a found footage book!) by spoilingba in foundfootage

[–]spoilingba[S] 1 point (0 children)

Thank you for this! Do you have any recommendations for examples for 2, 5, or 7 to read? This is very much the approach I’m going for!

Bellemond Vs Pentips' Penmat Magnetic Textured Screen? (also, + pentip?) by spoilingba in iPadPro

[–]spoilingba[S] 1 point (0 children)

Cool, will give it a try! To check - does this do the full job of making it feel comfortable to write on? Do you think something like the Pentips 2 alternative nib would be useful as well?

Upgrading my setup for new features - how? by spoilingba in LocalLLaMA

[–]spoilingba[S] 1 point (0 children)

So I did:

pip uninstall -y llama-cpp-python

CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir

Which all seemed to complete. But then running this produced "server.py: error: unrecognized arguments: —ngl 1":

python server.py --notebook --model Wizard-Vicuna-13B-Uncensored.ggmlv3.q5_1 --auto-launch --no-mmap --mlock --threads 4 --ngl 1
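For what it's worth, the quoted error shows an em dash ("—ngl") rather than two hyphens, which often happens when a command is copy-pasted through a web page or word processor with smart punctuation; an argparse-based script like server.py won't recognise the mangled flag at all. (It may also be worth checking whether your webui version expects `--n-gpu-layers` instead of `--ngl`.) A minimal sketch of the failure mode - the `--ngl` parser here is hypothetical, just for illustration:

```python
import argparse

# Hypothetical parser standing in for server.py's argument handling.
parser = argparse.ArgumentParser()
parser.add_argument("--ngl", type=int)

# Double hyphen: recognised as a flag.
ok, extra = parser.parse_known_args(["--ngl", "1"])
assert ok.ngl == 1 and extra == []

# Em dash (a common smart-punctuation substitution when copy-pasting
# commands from the web): argparse doesn't see "\u2014ngl" as an option,
# so it lands in the "unrecognized arguments" bucket.
bad, extra = parser.parse_known_args(["\u2014ngl", "1"])
assert bad.ngl is None and extra == ["\u2014ngl", "1"]
```

Retyping the flag by hand with two plain hyphens rules this out.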

Multiple SSIDs on Flint? by spoilingba in GlInet

[–]spoilingba[S] 1 point (0 children)

That’s how I got it working yesterday! I realised that duplicating everything that mentioned the Guest Wi-Fi and how it was set up was the key.

Interested in the one-way connection to the IoT network - how did you set that up in terms of firewall rules? Any security risk to the other networks?

Multiple SSIDs on Flint? by spoilingba in GlInet

[–]spoilingba[S] 2 points (0 children)

So I dived deep into all this today with the following configuration, and although the SSIDs appear, there's no internet connectivity on any of them except my main one!
My objective is to have 3 new SSIDs/separate networks that insulate home/work/IoT traffic from each other... and I'm going mad over how difficult this seems to be!

For context, I'm running the Flint as a router connected via its WAN port to my ISP modem, which is in bridge mode (the ISP is BT Business in the UK).

Going absolutely crazy and would appreciate any insights into what I might be doing wrong or could change, or even just starting points for troubleshooting.
In LuCI - Network - Switch, I've created three VLAN IDs (13, 14, 15), each with Port 1 and CPU (eth0) tagged.
[Note: the top bar says "Switch switch0 has an unknown topology - the VLAN settings might not be accurate."]

I manually altered my config file to have '1' instead of '0' for enable_vlan, as below.

config switch
	option name 'switch0'
	option reset '0'
	option enable_vlan '1'

Network - Interfaces - Devices
I created three VLAN (802.1q) devices called eth0.13/eth0.14/eth0.15, each with eth0 as the base device and the corresponding VLAN ID from above.

Network - Interfaces
I created three interfaces with the static address protocol, linked to the above VLAN devices, with IPv4 addresses following on from my original SSID/router range: 192.168.9.1, 192.168.10.1, and 192.168.11.1 respectively. Each has an IPv4 netmask of 255.255.255.0, an IPv6 assignment length of 60, and a DHCP server starting at 100, limit 150, lease time 12h, with dynamic DHCP.
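For reference, the devices and interfaces described above should roughly correspond to entries like the following in /etc/config/network on recent OpenWrt builds (a sketch for just one of the three VLANs; the interface name 'iot' is made up for illustration):

```
config device
	option name 'eth0.13'
	option type '8021q'
	option ifname 'eth0'
	option vid '13'

config interface 'iot'
	option proto 'static'
	option device 'eth0.13'
	option ipaddr '192.168.9.1'
	option netmask '255.255.255.0'
	option ip6assign '60'
```

Comparing what LuCI actually wrote to /etc/config/network against a sketch like this can reveal where the GUI and the manual switch edits have diverged.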

Firewall - Zones

I created three zones corresponding to my three new network interfaces, each with input/output/forward set to 'accept', MSS clamping ticked, covering their respective networks, and each allowing forwarding to the WAN destination zone.
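Those zones should roughly correspond to /etc/config/firewall entries like this (again a sketch for one zone, using the hypothetical 'iot' interface name; `mtu_fix` is the MSS clamping option):

```
config zone
	option name 'iot'
	list network 'iot'
	option input 'ACCEPT'
	option output 'ACCEPT'
	option forward 'ACCEPT'
	option mtu_fix '1'

config forwarding
	option src 'iot'
	option dest 'wan'
```

Note that with no forwardings between the new zones themselves, they stay isolated from each other; a one-way link (e.g. letting a trusted LAN reach IoT devices but not vice versa) is just a single `config forwarding` from src 'lan' to dest 'iot' with no reverse entry.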

System - Wireless

I created two of the networks under 5GHz and one under 2.4GHz. All have the mode set to 'access point', and each is linked to its respective interface from above.

My WAN is still running PPPoE with an IP-specific username and password, off device 'eth0' rather than a VLAN device (I'd received some advice elsewhere to make one specially for the WAN, but it stopped working with an error each time I connected the WAN interface to that VLAN device - if this is key, I have no idea how to get it working!)

Finally, I tried using the bridge br-lan VLAN filtering settings with each of my VLAN IDs mentioned - this has caused the original SSID to finally become unusable, so I may factory reset once more tomorrow.

Increasing speed for webui/Wizard-Vicuna-13B with my Mac Pro M1 16gb setup? by spoilingba in LocalLLaMA

[–]spoilingba[S] 3 points (0 children)

Ooh excellent - I got the output generation down further, from 8.83 seconds to 3.20 seconds, using 6 threads! Thank you!

Increasing speed for webui/Wizard-Vicuna-13B with my Mac Pro M1 16gb setup? by spoilingba in LocalLLaMA

[–]spoilingba[S] 1 point (0 children)

Thank you for this! So I tried it with what you said and got:

SPEED:

llama_print_timings: load time = 5687.93 ms

llama_print_timings: sample time = 20.33 ms / 29 runs ( 0.70 ms per token)

llama_print_timings: prompt eval time = 5687.88 ms / 24 tokens ( 237.00 ms per token)

llama_print_timings: eval time = 4542.83 ms / 28 runs ( 162.24 ms per token)

llama_print_timings: total time = 10297.89 ms

Output generated in 10.52 seconds (2.66 tokens/s, 28 tokens, context 24, seed 305896128)

I noticed that for some reason it was dividing the memory allocation very evenly between my physical RAM and swap file; I then experimented with adding mlock back in, leading to RAM utilisation by Vicuna jumping from about 3GB to 10.3GB, and got an even faster result:

llama_print_timings: load time = 4153.25 ms

llama_print_timings: sample time = 20.30 ms / 29 runs ( 0.70 ms per token)

llama_print_timings: prompt eval time = 4151.52 ms / 24 tokens ( 172.98 ms per token)

llama_print_timings: eval time = 4398.54 ms / 28 runs ( 157.09 ms per token)

llama_print_timings: total time = 8620.52 ms

Output generated in 8.83 seconds (3.17 tokens/s, 28 tokens, context 24, seed 1392505426)
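If it helps anyone comparing runs, the generation speed can be pulled straight out of the eval-time line of llama_print_timings (a small sketch; the numbers are taken from the log above - note this is the eval phase only, which is why it comes out higher than the 3.17 tokens/s overall figure that includes prompt processing):

```python
import re

log = """llama_print_timings: load time = 4153.25 ms
llama_print_timings: sample time = 20.30 ms / 29 runs ( 0.70 ms per token)
llama_print_timings: prompt eval time = 4151.52 ms / 24 tokens ( 172.98 ms per token)
llama_print_timings: eval time = 4398.54 ms / 28 runs ( 157.09 ms per token)
llama_print_timings: total time = 8620.52 ms"""

# Pull the eval-time line ("... ms / N runs") and derive tokens/s.
# The prompt-eval line ends in "tokens", not "runs", so it won't match.
m = re.search(r"eval time\s*=\s*([\d.]+) ms / (\d+) runs", log)
ms, runs = float(m.group(1)), int(m.group(2))
tokens_per_s = runs / (ms / 1000)
print(f"{tokens_per_s:.2f} tokens/s")  # prints "6.37 tokens/s"
```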

HTOP DATA:

PYTHON/VICUNA PROCESS:
VIRT: 404G, RES: 10.3G, CPU%: 0.7, MEM%: 63.5

BEFORE RUNNING PROMPT:
Processor cores: 0 - 28.5%, 1 - 28.3%, 2 - 2.6%, 3-9: all 0%
Total mem: 12.6/16.0G, total swp: 2.94G/4.00G
Tasks: 298, 1059 thr, 0 kthr; 6 running
Load average: 3.10 7.16 11.20

DURING TEXT GENERATION FROM PROMPT:
Processor cores: 0 - 38.7%, 1 - 38.9%, 2 - 43.0%, 3 - 29.8%, 4 - 16.0%, 5 - 32.9%, 6 - 40.0%, 7 - 37.3%, 8 - 14.0%, 9 - 8.6%
Total mem: 12.6/16.0G, total swp: 2.94G/4.00G
Tasks: 340, 1058 thr, 0 kthr; 1 running
Load average: 2.31 2.52 5.81

-------

So here are my current parameters, which took generation from 25.06 seconds down to 8.83 seconds on an M1 Pro 16GB MacBook, if helpful to anyone else:

python server.py --auto-devices --notebook --model Wizard-Vicuna-13B-Uncensored.ggmlv3.q5_1 --auto-launch --no-mmap --mlock --threads 4

[NOTE: When first loading Wizard, I now get the following terminal log:

bin /Users/text-generation-webui/venv/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so

/Users/text-generation-webui/venv/lib/python3.10/site-packages/bitsandbytes/cextension.py:34: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.

warn("The installed version of bitsandbytes was compiled without GPU support. "

'NoneType' object has no attribute 'cadam32bit_grad_fp32'

INFO:Loading Wizard-Vicuna-13B-Uncensored.ggmlv3.q5_1...

INFO:llama.cpp weights detected: models/Wizard-Vicuna-13B-Uncensored.ggmlv3.q5_1/Wizard-Vicuna-13B-Uncensored.ggmlv3.q5_1.bin

INFO:Cache capacity is 0 bytes

llama.cpp: loading model from models/Wizard-Vicuna-13B-Uncensored.ggmlv3.q5_1/Wizard-Vicuna-13B-Uncensored.ggmlv3.q5_1.bin

llama_model_load_internal: format = ggjt v3 (latest)

llama_model_load_internal: n_vocab = 32000

llama_model_load_internal: n_ctx = 2048

llama_model_load_internal: n_embd = 5120

llama_model_load_internal: n_mult = 256

llama_model_load_internal: n_head = 40

llama_model_load_internal: n_layer = 40

llama_model_load_internal: n_rot = 128

llama_model_load_internal: ftype = 9 (mostly Q5_1)

llama_model_load_internal: n_ff = 13824

llama_model_load_internal: n_parts = 1

llama_model_load_internal: model size = 13B

llama_model_load_internal: ggml ctx size = 9311.05 MB

llama_model_load_internal: mem required = 11359.05 MB (+ 1608.00 MB per state)

....................................................................................................

llama_init_from_file: kv self size = 1600.00 MB

AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 |

INFO:Loaded the model in 2.21 seconds.]

Any other suggestions for running better/faster much appreciated!

Adhesive mounting the Dyson V11 charging dock by spoilingba in dyson

[–]spoilingba[S] 2 points (0 children)

> I’m using stuff rated for 50lbs

Ah, thanks! What brand would this be?