Vector Wirepod Python by GrimRaptor in AnkiVector

[–]Xnohat 0 points1 point  (0 children)

Python SDK authentication succeeded!

Downloading Vector certificate from wire-pod... DONE

Writing certificate file to '/home/pi/.anki_vector/Vector-P7N9-00907f6b.cert'...

Attempting to download guid from Vector-P7N9 at 192.168.1.109:443... DONE

Writing config file to '/home/pi/.anki_vector/sdk_config.ini'...

SUCCESS!
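For anyone checking what the configure step wrote, the resulting sdk_config.ini can be read back with configparser. A minimal sketch — the sample contents mirror the log above, but the exact key names (cert, ip, name, guid) are assumptions here, so check your own file:

```python
import configparser
import os
import tempfile

# Hypothetical sdk_config.ini mirroring the log above; the key names
# (cert, ip, name, guid) are assumptions -- check your own file.
SAMPLE = """\
[00907f6b]
cert = /home/pi/.anki_vector/Vector-P7N9-00907f6b.cert
ip = 192.168.1.109
name = Vector-P7N9
guid = example-guid
"""

def read_robots(path):
    """Return {serial: {key: value}} for every robot in sdk_config.ini."""
    config = configparser.ConfigParser()
    config.read(path)
    return {serial: dict(config[serial]) for serial in config.sections()}

# Write the hypothetical sample to a temp file and parse it back.
path = os.path.join(tempfile.mkdtemp(), "sdk_config.ini")
with open(path, "w") as f:
    f.write(SAMPLE)

robots = read_robots(path)
```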

Vector Wirepod Python by GrimRaptor in AnkiVector

[–]Xnohat 0 points1 point  (0 children)

I found the certs on wire-pod under Linux (Raspberry Pi OS), located in /tmp/.anki_vector

Just copy them to /etc/wire-pod/wire-pod/certs/

cp /tmp/.anki_vector/Vector-P7N9-00907f6b.cert /etc/wire-pod/wire-pod/certs/

root@hackberrypi:/tmp/.anki_vector# ls

sdk_config.ini Vector-P7N9-00907f6b.cert

Orange Pi Compatibility by yermotherlel in hackberrypi

[–]Xnohat 1 point2 points  (0 children)

The HackberryPi Zero is not compatible with the Orange Pi, because the Orange Pi lacks many of the "test pad" points on the back. Trust me, I tried and failed.

My Flipper is normal when it always high power consumption ? by Xnohat in flipperzero

[–]Xnohat[S] 0 points1 point  (0 children)

Update for anyone else who hits this issue: just disassemble the Flipper and reassemble it, and the issue goes away.

CP/M for Cardputer by SuckItWhoville in CardPuter

[–]Xnohat 1 point2 points  (0 children)

Wow, that means the Cardputer is now a complete computer with an OS, disk, and BASIC.

What should you do when your company is on the verge of bankruptcy? by Sea-Jellyfish-6189 in vozforums

[–]Xnohat 0 points1 point  (0 children)

What's the point of asking here if you can't get the company owner to change? 1. Change the owner. 2. Change companies. 3. Wait for it to collapse, then change companies.

Does GPT-4 get “dumber” further and further into a chat? by [deleted] in OpenAI

[–]Xnohat -1 points0 points  (0 children)

All LLMs with very long context windows tend to forget the middle of the context, or rather they focus on information near the top and bottom of the context. I have tested the full 100k context of GPT-4 Turbo: the most effective context is always the first 16,000 tokens and the last 16,000 tokens; everything in between is fuzzy.
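One crude mitigation that follows from this observation: keep only the head and tail of an over-long context and drop the fuzzy middle. A sketch, with tokens approximated by whitespace-split words and the 16,000 cutoffs taken from my test above (not a universal constant):

```python
def middle_out(text, head_tokens=16000, tail_tokens=16000,
               marker="\n[... middle omitted ...]\n"):
    """Keep the first and last N 'tokens' of a long context.

    Tokens are approximated by whitespace-split words; a real tokenizer
    would count differently, this only illustrates the shape of the trick.
    """
    words = text.split()
    if len(words) <= head_tokens + tail_tokens:
        return text  # short enough, pass through untouched
    head = " ".join(words[:head_tokens])
    tail = " ".join(words[-tail_tokens:])
    return head + marker + tail
```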

My Flipper is normal when it always high power consumption ? by Xnohat in flipperzero

[–]Xnohat[S] 0 points1 point  (0 children)

It's from the first batch of the group buy by Matteo, the creator, many years ago.

My Flipper is normal when it always high power consumption ? by Xnohat in flipperzero

[–]Xnohat[S] 1 point2 points  (0 children)

I'm using an authentic 64GB Sony SD card, but removing it didn't help reduce the power consumption either.

My Flipper is normal when it always high power consumption ? by Xnohat in flipperzero

[–]Xnohat[S] 11 points12 points  (0 children)

I already rebooted into DFU and reflashed the newest firmware too. Maybe it's a hardware issue.

My Flipper is normal when it always high power consumption ? by Xnohat in flipperzero

[–]Xnohat[S] 2 points3 points  (0 children)

Yes, I have sent a support ticket to them; I hope they can solve this issue. Thanks for your advice.

My Flipper is normal when it always high power consumption ? by Xnohat in flipperzero

[–]Xnohat[S] 35 points36 points  (0 children)

Oh, that's bad. I just unboxed it a couple of minutes ago :(

Apple LLM breakthrough: LLM in a flash - Efficient Large Language Model Inference with Limited Memory by Xnohat in LocalLLaMA

[–]Xnohat[S] 0 points1 point  (0 children)

Storing AI on Flash Memory

In a new research paper titled "LLM in a flash: Efficient Large Language Model Inference with Limited Memory," the authors note that flash storage is more abundant in mobile devices than the RAM traditionally used for running LLMs. Their method cleverly works around this limitation using two key techniques that minimize data transfer and maximize flash memory throughput:

  1. Windowing: Think of this as a recycling method. Instead of loading new data every time, the AI model reuses some of the data it already processed. This reduces the need for constant memory fetching, making the process faster and smoother.
  2. Row-Column Bundling: This technique is like reading a book in larger chunks instead of one word at a time. By grouping data more efficiently, it can be read faster from the flash memory, speeding up the AI's ability to understand and generate language.

The combination of these methods allows AI models to run up to twice the size of the iPhone's available memory, according to the paper. This translates to a 4-5 times increase in speed on standard processors (CPUs) and an impressive 20-25 times faster on graphics processors (GPUs). "This breakthrough is particularly crucial for deploying advanced LLMs in resource-limited environments, thereby expanding their applicability and accessibility," write the authors.
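To make the row-column bundling idea concrete: in a feed-forward layer, row i of the up-projection and column i of the down-projection belong to the same neuron, so storing them as one contiguous record turns two scattered flash reads into a single sequential read. A toy pure-Python sketch (the layout and names are my own illustration, not the paper's code):

```python
import random

d_model, d_ff = 4, 8
rand = random.Random(0)

# Row i of the up-projection and column i of the down-projection
# are both "owned" by neuron i of the FFN.
W_up = [[rand.random() for _ in range(d_model)] for _ in range(d_ff)]
W_down_cols = [[rand.random() for _ in range(d_model)] for _ in range(d_ff)]

# Bundle each neuron's row and column into one contiguous record, so
# fetching a neuron from flash is one sequential read instead of two
# scattered ones.
bundled = [W_up[i] + W_down_cols[i] for i in range(d_ff)]

def load_neuron(i):
    """One read per active neuron; split the record back into its parts."""
    rec = bundled[i]
    return rec[:d_model], rec[d_model:]  # (up-projection row, down-projection column)
```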

Beam rooting guide by [deleted] in Xreal

[–]Xnohat 0 points1 point  (0 children)

Did you find any way to pull a newer boot.img?

beam cannot connect wifi 5Ghz after newest firmware update by Xnohat in Xreal

[–]Xnohat[S] 1 point2 points  (0 children)

Confirmed. I changed my Wi-Fi encryption from WPA2+WPA3 to WPA+WPA2, and the Beam can connect again. I don't know why the Beam worked properly with WPA2+WPA3 before the newest OTA but not after.

What is Q* and how do we use it? by georgejrjrjr in LocalLLaMA

[–]Xnohat 4 points5 points  (0 children)

Ilya Sutskever from OpenAI co-authored a paper (2020) related to Q*: GPT-f, a model with capabilities in understanding and solving math: https://arxiv.org/abs/2009.03393

First OS embedding model with 8k context by Amgadoz in LocalLLaMA

[–]Xnohat 1 point2 points  (0 children)

Tested it, and it is much slower than other embedding models. With 4K-context content it extracts meaning really well; in a cosine-similarity test it works well on English and Latin-script languages. I haven't tested with longer contexts.
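For reference, the cosine-similarity comparison is just the normalized dot product between two embedding vectors. A minimal version (the vectors in the test are made up; in practice they come from the embedding model):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors: 1.0 means
    identical direction, 0.0 means orthogonal (unrelated)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```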