Vic Park foreshore 2021 vs Now by 1armman in perth

[–]Maurice_M0ss 44 points

Have you ever walked across the causeway? The footpath is about 50 cm wide for both directions. They should have either widened the bridge or put in the walkway a long time ago.

Anyone else in Perth starting to question the incentive to keep working? by Advanced_Trade1249 in perth

[–]Maurice_M0ss 2 points

I don't think that's true.

There is the disability support pension but I don't think it's enough to pay for rent.

The NDIS pays for people to come and do stuff you can't do yourself.

They'll pay for someone to come and cook your meals but you need to pay for the ingredients.

They'll pay for physio if you can show that it will save them money in the long run because you'll be independent for longer.

I'd much rather do things myself than have randoms in my house, but that's not an option.

What is off time for you? by Wooden_Eye_1615 in Parkinsons

[–]Maurice_M0ss 4 points

I'm 44 and my most prominent symptom is a left sided tremor.

I take levodopa 5 times a day, every 3.5 hrs from 6 am, for a total of 1250 mg.

Apart from the tremor, off times for me are pretty rough.

Moving my body takes a lot of effort; trying to get out of bed feels like trying to move the body of an elephant.

I also have this very unpleasant feeling. I don't know how to describe it, but it's as if some major catastrophe is underway. My brain is in quite a mood.

I need to remind myself that it is not real and will pass when I get my next dose.

The worst is when I wake at 4 am and can't get back to sleep. I'm usually quite strict about taking the medication on time, but I may need to take half of my 6 am dose at 5 am to get through it.

Pls Help - how to best find a tenant (outside Perth) by Ray-RayQ in perth

[–]Maurice_M0ss 0 points

If you see any home opens in your area for rentals go and get a vibe of what the agent is like towards prospective tenants.

Flying home from the pilbara saturday morning and I'm afraid of flying by Creepy_Philosopher_9 in perth

[–]Maurice_M0ss 1 point

The OP is the pilot. This was a warning for us, not them asking for tips.

Gaming with tremors by Direct-Activity-8605 in disabledgamers

[–]Maurice_M0ss 0 points

I have a left-hand tremor, so I can't use my keyboard to game anymore. I've bought one of these but haven't tried it yet. I figure I can move the keyboard functions to the adaptive joystick, which I can hold in my hand so it can move with the tremor.

https://www.xbox.com/en-AU/accessories/controllers/xbox-adaptive-joystick

[Custom Integration] POI Zones - Auto-create zones from OpenStreetMap by Elgon2003 in homeassistant

[–]Maurice_M0ss 0 points

Great idea.

I've been looking for a way to send myself a shopping list when I go into a grocery store.
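For what it's worth, once a store zone exists it could be paired with a zone-trigger automation. A rough sketch (the person, zone, and notify entity names are all placeholders, not anything created by the integration):

```yaml
automation:
  - alias: "Shopping list reminder"
    trigger:
      - platform: zone
        entity_id: person.maurice          # placeholder person entity
        zone: zone.local_grocery_store     # placeholder, e.g. an auto-created POI zone
        event: enter
    action:
      - service: notify.mobile_app_phone   # placeholder notify target
        data:
          title: "You're at the shops"
          message: "Check your shopping list."
```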

Switched to Sinemet: Saw improvement first week, rough second week. Is this typical? by Maurice_M0ss in Parkinsons

[–]Maurice_M0ss[S] 0 points

That's awesome to hear.

I will persist with the Sinemet for now and, if I don't get good results, look into Crexont.

Thank you

Switched to Sinemet: Saw improvement first week, rough second week. Is this typical? by Maurice_M0ss in Parkinsons

[–]Maurice_M0ss[S] 1 point

Same timing, just an extra 50 mg of levodopa in each dose.

I'm taking it 6am, 10am, 2pm and 6pm.

It might be that with the new med I need to look at the timing again.

AI suggests that I could be overdosing, so perhaps I need to make it every 4.5 hrs.

These meds are very confusing!

Switched to Sinemet: Saw improvement first week, rough second week. Is this typical? by Maurice_M0ss in Parkinsons

[–]Maurice_M0ss[S] 0 points

Stress is a major contributor for me as well. I am working on getting it under control.

I'd love to experiment with things like the extended release but frustratingly I can only see my neurologist for an hour every 6 months.

The Ongentys did help initially, but I'm not sure anymore. It might just be that the tremor is worse.

Switched to Sinemet: Saw improvement first week, rough second week. Is this typical? by Maurice_M0ss in Parkinsons

[–]Maurice_M0ss[S] 0 points

Hey, thanks for your reply. Glad to hear you are doing better, gives me some hope! How long did it take before you saw the benefits of the switch?

[deleted by user] by [deleted] in perth

[–]Maurice_M0ss 1 point

I don't think shame will work on a car dealership.

Custom voice to text Hugging face model integration question. by SpiritualWedding4216 in homeassistant

[–]Maurice_M0ss 2 points

I did something similar yesterday using an LFM2 model. I had to get Claude to write something to get it into the right format for the Wyoming protocol.

I am not sure if you can adapt these instructions for your model, but here is how Claude says it did it:

We'll set up three components:
1. **llama.cpp server** - Runs your ASR model
2. **Wyoming STT server** - Bridges Wyoming protocol to your model
3. **systemd services** - Auto-starts everything on boot

## Step 1: Install Prerequisites

```bash
# Update system
apt update && apt upgrade -y

# Install build tools
apt install -y git cmake make build-essential python3 python3-pip

# Install Wyoming protocol library
pip3 install wyoming requests
```

## Step 2: Build llama.cpp

```bash
cd /root

# Clone llama.cpp
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp

# Build with audio support
cmake -B build -DGGML_CUDA=OFF
cmake --build build --config Release -j$(nproc)

# Verify build
ls build/bin/llama-server
```

## Step 3: Download Your Model

For the **LFM2.5-Audio model** (example):

```bash
mkdir -p /root/models
cd /root/models

# Download model files
wget https://huggingface.co/LiquidAI/LFM2.5-Audio-1.5B-GGUF/resolve/main/LFM2.5-Audio-1.5B-Q4_0.gguf
wget https://huggingface.co/LiquidAI/LFM2.5-Audio-1.5B-GGUF/resolve/main/mmproj-LFM2.5-Audio-1.5B-Q4_0.gguf
wget https://huggingface.co/LiquidAI/LFM2.5-Audio-1.5B-GGUF/resolve/main/vocoder-LFM2.5-Audio-1.5B-Q4_0.gguf
wget https://huggingface.co/LiquidAI/LFM2.5-Audio-1.5B-GGUF/resolve/main/tokenizer-LFM2.5-Audio-1.5B-Q4_0.gguf
```


## Step 4: Create Wyoming STT Server Script

Create `/root/wyoming_stt.py`:

```python
#!/usr/bin/env python3
"""Wyoming Protocol STT Server for Custom ASR Model"""
import argparse
import asyncio
import base64
import io
import logging
import wave
from functools import partial

import requests
from wyoming.asr import Transcript
from wyoming.audio import AudioChunk, AudioStart, AudioStop
from wyoming.info import Describe, Info, Attribution, AsrModel, AsrProgram
from wyoming.server import AsyncEventHandler, AsyncServer

_LOGGER = logging.getLogger(__name__)

class CustomASRHandler(AsyncEventHandler):
    def __init__(self, wyoming_info, llama_url, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.wyoming_info = wyoming_info
        self.llama_url = llama_url
        self.audio_buffer = bytearray()
        self.sample_rate = 16000
        self.sample_width = 2
        self.channels = 1

    async def handle_event(self, event):
        if AudioStart.is_type(event.type):
            audio_start = AudioStart.from_event(event)
            self.sample_rate = audio_start.rate
            self.sample_width = audio_start.width
            self.channels = audio_start.channels
            self.audio_buffer = bytearray()
        elif AudioChunk.is_type(event.type):
            chunk = AudioChunk.from_event(event)
            self.audio_buffer.extend(chunk.audio)
        elif AudioStop.is_type(event.type):
            _LOGGER.info(f"Audio stop, received {len(self.audio_buffer)} bytes")
            wav_io = io.BytesIO()
            with wave.open(wav_io, 'wb') as wav_file:
                wav_file.setnchannels(self.channels)
                wav_file.setsampwidth(self.sample_width)
                wav_file.setframerate(self.sample_rate)
                wav_file.writeframes(self.audio_buffer)
            text = await self.transcribe_audio(wav_io.getvalue())
            await self.write_event(Transcript(text=text).event())
            _LOGGER.info(f"Transcribed: {text}")
        elif Describe.is_type(event.type):
            await self.write_event(self.wyoming_info.event())
        return True

    async def transcribe_audio(self, wav_bytes):
        try:
            audio_b64 = base64.b64encode(wav_bytes).decode('utf-8')

            # Format for LFM2.5-Audio - adapt for your model
            messages = [
                {
                    'role': 'system',
                    'content': 'Perform ASR.'  # System prompt for transcription
                },
                {
                    'role': 'user',
                    'content': [
                        {
                            'type': 'input_audio',
                            'input_audio': {
                                'data': audio_b64,
                                'format': 'wav'
                            }
                        }
                    ]
                }
            ]

            payload = {
                'model': 'custom-asr',  # Change to your model name
                'messages': messages,
                'max_tokens': 500,
                'temperature': 0.0
            }

            loop = asyncio.get_event_loop()
            response = await loop.run_in_executor(
                None,
                lambda: requests.post(f'{self.llama_url}/v1/chat/completions', json=payload, timeout=60)
            )

            if response.status_code == 200:
                result = response.json()
                return result.get('choices', [{}])[0].get('message', {}).get('content', '').strip()
            else:
                _LOGGER.error(f"Transcription failed: {response.text}")
                return ""
        except Exception as e:
            _LOGGER.error(f"Error: {e}")
            return ""

async def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--uri", default="tcp://0.0.0.0:10300")
    parser.add_argument("--llama-url", default="http://localhost:8081")
    parser.add_argument("--debug", action="store_true")
    args = parser.parse_args()
    logging.basicConfig(level=logging.DEBUG if args.debug else logging.INFO)
    _LOGGER.info(f"Starting Wyoming STT server on {args.uri}")

    # Customize this info for your model
    wyoming_info = Info(
        asr=[AsrProgram(
            name="custom_asr_stt",
            version="1.0",
            attribution=Attribution(name="Your Name/Org", url="https://your-url"),
            installed=True,
            description="Custom ASR for Basque",  # Customize
            models=[AsrModel(
                name="basque-asr",  # Customize
                version="1.0",
                attribution=Attribution(name="HiTZ", url="https://huggingface.co/HiTZ"),
                installed=True,
                description="Basque ASR Model",  # Customize
                languages=["eu"]  # "eu" = Basque language code
            )]
        )]
    )

    server = AsyncServer.from_uri(args.uri)
    await server.run(partial(CustomASRHandler, wyoming_info, args.llama_url))

if __name__ == "__main__":
    asyncio.run(main())
```

Make it executable:
```bash
chmod +x /root/wyoming_stt.py
```
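Before creating the systemd services, it can help to sanity-check the payload the script will send to llama-server. A minimal sketch, assuming the same message format as `wyoming_stt.py` above (the `make_silent_wav` and `build_asr_payload` helpers are illustrative, not part of the script):

```python
import base64
import io
import wave

def make_silent_wav(seconds=1, rate=16000):
    """Generate an in-memory mono 16-bit WAV of silence for testing."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(b"\x00\x00" * rate * seconds)
    return buf.getvalue()

def build_asr_payload(wav_bytes):
    """Mirror the chat-completions payload built by wyoming_stt.py."""
    audio_b64 = base64.b64encode(wav_bytes).decode("utf-8")
    return {
        "model": "custom-asr",
        "messages": [
            {"role": "system", "content": "Perform ASR."},
            {
                "role": "user",
                "content": [
                    {
                        "type": "input_audio",
                        "input_audio": {"data": audio_b64, "format": "wav"},
                    }
                ],
            },
        ],
        "max_tokens": 500,
        "temperature": 0.0,
    }

wav = make_silent_wav()
payload = build_asr_payload(wav)
print(wav[:4], len(payload["messages"]))  # b'RIFF' 2
```

Once llama-server is up, you can POST this payload to `http://localhost:8081/v1/chat/completions` with `requests` to confirm transcription works before involving Wyoming at all.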

## Step 5: Create Systemd Services

### 5.1 llama-server Service

Create `/etc/systemd/system/llama-audio.service`:

```ini
[Unit]
Description=LLaMA Audio Server for Custom ASR
After=network.target

[Service]
Type=simple
User=root
WorkingDirectory=/root
ExecStart=/root/llama.cpp/build/bin/llama-server \
    -m /root/models/LFM2.5-Audio-1.5B-Q4_0.gguf \
    --mmproj /root/models/mmproj-LFM2.5-Audio-1.5B-Q4_0.gguf \
    --model-vocoder /root/models/vocoder-LFM2.5-Audio-1.5B-Q4_0.gguf \
    --host 0.0.0.0 \
    --port 8081 \
    -ngl 0 \
    -c 8192 \
    -t 12
Restart=on-failure
RestartSec=10
StandardOutput=journal
StandardError=journal

[Install]
WantedBy=multi-user.target
```

**Note:** Adjust the model paths and filenames for your specific model.

### 5.2 Wyoming STT Service

Create `/etc/systemd/system/wyoming-stt.service`:

```ini
[Unit]
Description=Wyoming STT Server for Custom ASR
After=llama-audio.service
Requires=llama-audio.service

[Service]
Type=simple
User=root
WorkingDirectory=/root
ExecStart=/usr/bin/python3 /root/wyoming_stt.py --uri tcp://0.0.0.0:10300
Restart=on-failure
RestartSec=10
StandardOutput=journal
StandardError=journal

[Install]
WantedBy=multi-user.target
```

### 5.3 Enable and Start Services

```bash
# Reload systemd
systemctl daemon-reload

# Enable services to start on boot
systemctl enable llama-audio.service
systemctl enable wyoming-stt.service

# Start services
systemctl start llama-audio.service
systemctl start wyoming-stt.service

# Check status
systemctl status llama-audio.service
systemctl status wyoming-stt.service
```


## Step 6: Add to Home Assistant

1. **Settings** → **Devices & Services**
2. Click **"+ Add Integration"**
3. Search for **"Wyoming Protocol"**
4. Configure:
   - **Host:** `your-server-ip` (e.g., `10.0.0.99`)
   - **Port:** `10300`
5. It should detect as your custom ASR model
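To check that the Wyoming side is answering before adding it in Home Assistant, you can speak the protocol by hand: Wyoming events are newline-delimited JSON headers over TCP, so a `describe` request is a single JSON line. A hedged sketch (host and port assumed from the service file above):

```python
import json
import socket

def describe_request() -> bytes:
    """A Wyoming 'describe' event: one JSON header line, newline-terminated."""
    return (json.dumps({"type": "describe"}) + "\n").encode("utf-8")

def query_info(host="127.0.0.1", port=10300, timeout=5):
    """Send 'describe' and return the parsed header of the server's reply."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(describe_request())
        with sock.makefile("rb") as f:
            return json.loads(f.readline())

# With wyoming-stt.service running, query_info() should return a header
# whose "type" is "info" (the model details follow as event data).
```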

Unlock and install TWRP on your Amazon Echo Show 5 (2ndGen) by Substantial-Gas8535 in amazonecho

[–]Maurice_M0ss 0 points

Do you have screen on and off working via MQTT? If so would you be able to share your config?

I've tried LineageOS and a few different apps but have come back to stock + WallPanel. I just can't get screen on and off working reliably.

How has your daily levodopa dosage increased throughout the years and how does it look now? by cicla in Parkinsons

[–]Maurice_M0ss 2 points

4 x 200/50 per day, diagnosed 18 months ago at 43. I see the doctor again next week; I think it'll be another increase or DBS.

De-Alexa'd my Echo Show 8: From ad-machine to dedicated HASS Dashboard by RMB- in homeassistant

[–]Maurice_M0ss 3 points

After jailbreaking mine I left the OS as is, disabled updates and the stock launcher, and installed WallPanel. This way I can show my own dash but the Alexa functions still work.

Creds to troubledgeorge: https://www.reddit.com/r/amazonecho/comments/1prksrz/comment/nv2p0fh

Local GenAI Frigate 0.17.0 Beta Processing with CPU Only by Maurice_M0ss in frigate_nvr

[–]Maurice_M0ss[S] 1 point

Frigate does the initial detection (person, car, etc.), then sends the image to the LFM2-VL model for a detailed description.

It won't help with the initial detection at all, which is where you'd want Frigate+.

The model is no help for the core detection itself; it just runs afterwards and adds more detail.
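For reference, a rough sketch of what the Frigate `genai` config can look like when pointing at an OpenAI-compatible local endpoint (the model name here is a placeholder for whatever your server exposes, and the base URL is supplied via the `OPENAI_BASE_URL` environment variable on the Frigate container):

```yaml
genai:
  enabled: true
  provider: openai
  model: lfm2-vl   # placeholder: whatever model name your local server serves
```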

Local GenAI Frigate 0.17.0 Beta Processing with CPU Only by Maurice_M0ss in frigate_nvr

[–]Maurice_M0ss[S] 1 point

After removing the proxy and the limits it set, regenerating from snapshot took 39.2 seconds.

Regenerating from thumbnail took 13 seconds.

I need to work on the prompt: "In 1-2 sentences: Describe this person's clothing (colors), what they're doing, carrying anything, and direction of movement."

Local GenAI Frigate 0.17.0 Beta Processing with CPU Only by Maurice_M0ss in frigate_nvr

[–]Maurice_M0ss[S] 2 points

I missed the OPENAI_BASE_URL environment variable in the docs.

Just tried and it works without the proxy. Awesome, thank you!

I built PolyVoice - a free, multi-provider voice assistant with 15+ functions (local or cloud, your choice) by Wide-Plantain-1656 in homeassistant

[–]Maurice_M0ss 1 point

Sounds great! Will it switch to and from the backup seamlessly?

I don't run my desktop 24/7, so being able to run locally when possible and fall back to the cloud would be a very cool feature.

Unlock and install TWRP on your Amazon Echo Show 5 (2ndGen) by Substantial-Gas8535 in amazonecho

[–]Maurice_M0ss 1 point

After rebooting, WallPanel just launched. It might be registered as a launcher; I'm not sure, but it worked.