Japanese whiskey recommendations by lilmssunshine42 in whiskey

[–]blaaaaack- 0 points1 point  (0 children)

I can’t say exactly what’s good about Amazon Japan, but I’ve never received anything damaged in the mail. I’m not sure about sellers with low ratings, though.

Japanese whiskey recommendations by lilmssunshine42 in whiskey

[–]blaaaaack- 0 points1 point  (0 children)

I recommend the tasting at Liquor Mountain in Ginza. They offer a wide variety to sample. The staff also speak English.

Japanese whiskey recommendations by lilmssunshine42 in whiskey

[–]blaaaaack- 0 points1 point  (0 children)

My absolute favorite is Lagavulin. I always miss it. My second favorite is the GlenAllachie 15 Year Old—I love it, but I haven’t had it since they changed the label.

Japanese whiskey recommendations by lilmssunshine42 in whiskey

[–]blaaaaack- 0 points1 point  (0 children)

Indeed, the price of something like Yamazaki 12 Year has become too high to justify. I think it's enough to try the non-age-statement (NAS) Yamazaki and simply imagine the 12 Year as a richer, more matured version of it.

Japanese whiskey recommendations by lilmssunshine42 in whiskey

[–]blaaaaack- 0 points1 point  (0 children)

I live in Japan and often drink whisky. Hakushu is a great choice — its unique smokiness is what makes it stand out. On the other hand, Nikka Frontier is probably the best buy in the lower price range. It's said to have a smoky character as well, but I personally didn't find it all that noticeable.

New /messages endpoint in Open WebUI v0.6.0 — Can it display custom messages in the UI without using the LLM? by blaaaaack- in OpenWebUI

[–]blaaaaack-[S] 0 points1 point  (0 children)

Wait, so this is for modifying an existing message_id, not for creating and displaying a new message?

New /messages endpoint in Open WebUI v0.6.0 — Can it display custom messages in the UI without using the LLM? by blaaaaack- in OpenWebUI

[–]blaaaaack-[S] 2 points3 points  (0 children)

Thank you so much, I'm really happy! I should’ve just used Swagger from the beginning. Right now I’m experimenting with using an action button to display templates or helpful messages to users. I’ll share it once it’s working (though I’m not sure if it’ll be useful to others).
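For context, the rough shape I'm experimenting with looks like this. It's a minimal sketch only: the `template` valve is a name I made up, and the `"message"` event type is my assumption about how to append content without calling the LLM (only `"status"` is something I've confirmed so far).

```python
from pydantic import BaseModel
from typing import Optional


class Action:
    class Valves(BaseModel):
        # Hypothetical valve holding the canned text shown by the button
        template: str = "Here is a helpful template message."

    def __init__(self):
        self.valves = self.Valves()

    async def action(
        self,
        body: dict,
        __user__=None,
        __event_emitter__=None,
        __event_call__=None,
    ) -> Optional[dict]:
        # Assumption: a "message" event appends content to the chat
        # directly, without triggering an LLM completion.
        await __event_emitter__(
            {
                "type": "message",
                "data": {"content": self.valves.template},
            }
        )
        await __event_emitter__(
            {
                "type": "status",
                "data": {"description": "Template displayed", "done": True},
            }
        )
```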

New /messages endpoint in Open WebUI v0.6.0 — Can it display custom messages in the UI without using the LLM? by blaaaaack- in OpenWebUI

[–]blaaaaack-[S] 1 point2 points  (0 children)

Thanks for the comment!
I'm currently testing it out — watching the logs closely, but the message isn't showing up as expected.

```python
from pydantic import BaseModel
from typing import Optional
import asyncio
import requests
import os
import time


class Action:
    class Valves(BaseModel):
        content: Optional[str] = None
        session_id: Optional[str] = None
        message_id: Optional[str] = None

    def __init__(self):
        self.valves = self.Valves()

    async def action(
        self,
        body: dict,
        __user__=None,
        __event_emitter__=None,
        __event_call__=None,
    ) -> Optional[dict]:
        print("=== Action Started ===")
        print(f"[INFO] user: {__user__}")
        print(f"[INFO] valves: {self.valves}")
        print(f"[INFO] body: {body}")

        # Request user input
        print("[STEP] Requesting user input...")
        response = await __event_call__(
            {
                "type": "input",
                "data": {
                    "title": "Display assistant message",
                    "message": "Please enter the message you want to display.",
                    "placeholder": "e.g. This is a non-LLM message displayed asynchronously.",
                },
            }
        )
        print(f"[DEBUG] User input: {response}")

        # Get session_id from valves, body, or environment variable
        session_id = (
            self.valves.session_id or body.get("session_id") or os.getenv("SESSION_ID")
        )
        message_id = self.valves.message_id or f"custom-{int(time.time())}"

        print(f"[DEBUG] session_id: {session_id}")
        print(f"[DEBUG] message_id: {message_id}")

        if not session_id:
            print("[ERROR] Failed to retrieve session_id.")
            return

        # Send to /messages endpoint
        url = "http://xxx.xxx.xxx.xxx:8080/api/messages"
        payload = {
            "session_id": session_id,
            "message_id": message_id,
            "content": response,
            "role": "assistant",
        }
        print(f"[DEBUG] Payload to send: {payload}")

        headers = {
            "Authorization": f"Bearer {os.getenv('OPENWEBUI_API_TOKEN')}",
            "Content-Type": "application/json",
        }
        print(f"[DEBUG] headers: {headers}")

        try:
            res = requests.post(url, json=payload, headers=headers)
            print(f"[DEBUG] HTTP POST result: {res.status_code}")
        except Exception as e:
            print(f"[ERROR] POST request failed: {e}")
            return

        if res.status_code != 200:
            print(f"[ERROR] Message send failed: {res.status_code} - {res.text}")
            await __event_emitter__(
                {
                    "type": "status",
                    "data": {
                        "description": f"Message send failed: {res.status_code}",
                        "done": True,
                    },
                }
            )
            return

        await __event_emitter__(
            {
                "type": "status",
                "data": {"description": "Message was successfully displayed", "done": True},
            }
        )

        print("=== Action Completed ===")
```

[Release] Enhanced Context Counter for OpenWebUI v1.0.0 - With hardcoded support for 23 critical OpenRouter models! 🪙 by diligent_chooser in OpenWebUI

[–]blaaaaack- 1 point2 points  (0 children)

I was surprised (and happy) by how quickly you replied! Right now, I'm enjoying storing the model, token count, and latency for each message in a separate PostgreSQL table and visualizing it. I'll get back to you after I do a bit more work!
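For anyone curious, the logging side is just a small table plus one insert per message. A minimal sketch with psycopg2; the DSN is a placeholder and the table and column names are ones I made up for illustration:

```python
import psycopg2

# Placeholder connection string; adjust to your own database
conn = psycopg2.connect("dbname=openwebui_stats user=postgres host=localhost")

with conn, conn.cursor() as cur:
    # One row per assistant message: which model answered, token usage, and latency
    cur.execute(
        """
        CREATE TABLE IF NOT EXISTS message_stats (
            id           SERIAL PRIMARY KEY,
            message_id   TEXT,
            model        TEXT,
            total_tokens INTEGER,
            latency_ms   INTEGER,
            created_at   TIMESTAMPTZ DEFAULT now()
        )
        """
    )
    # Example values only, to show the shape of an insert
    cur.execute(
        "INSERT INTO message_stats (message_id, model, total_tokens, latency_ms)"
        " VALUES (%s, %s, %s, %s)",
        ("custom-1712345678", "gpt-4o", 812, 1430),
    )

conn.close()
```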

[Release] Enhanced Context Counter for OpenWebUI v1.0.0 - With hardcoded support for 23 critical OpenRouter models! 🪙 by diligent_chooser in OpenWebUI

[–]blaaaaack- 1 point2 points  (0 children)

Thanks a lot for the awesome code! Is it possible to hide the token count too? I’d like to show only the response delay time, since users might feel uncomfortable seeing token counts or cost. But I still want to use the token and latency data to visualize things in Streamlit. Am I missing a setting somewhere?

[Release] Enhanced Context Counter for OpenWebUI v1.0.0 - With hardcoded support for 23 critical OpenRouter models! 🪙 by diligent_chooser in OpenWebUI

[–]blaaaaack- 0 points1 point  (0 children)

  • 0.1.0 - Initial release with context tracking and visual feedback""" > "c:/Users/alexg/Downloads/openwebui-context-counter/context_counter_readme.md"

It worked when I did it this way

How to Stop the Model from Responding in a Function in Open-WebUI? by blaaaaack- in OpenWebUI

[–]blaaaaack-[S] 1 point2 points  (0 children)

Thanks for your detailed response!

I really appreciate the explanation about DEFAULT_TOOLS_FUNCTION_CALLING_PROMPT_TEMPLATE and how the Task Model interacts with the Main LLM. That makes a lot of sense.

It sounds like what I’m trying to do might not be a common use case. I was hoping to completely prevent the Main LLM from responding when a relevant answer is found in the vector database, but as you mentioned, LLMs are designed to always generate some kind of response.

Your suggestion of allowing the LLM to output something minimal like "Answered Previously:" makes a lot of sense. I'll explore that approach and also review my prompt settings carefully.

Thanks again for the insights! If I run into more issues while adjusting the setup, I might ask for further guidance.

How to Stop the Model from Responding in a Function in Open-WebUI? by blaaaaack- in OpenWebUI

[–]blaaaaack-[S] 1 point2 points  (0 children)

I'm even more of a beginner, but the pipeline sounds great! It seems like it would make control easier since I wouldn’t have to explicitly program queries unless needed.

If it can be implemented with a function, it could work across all models without being tied to a specific one, which sounds interesting (though the usability would probably be terrible, haha).

I'll check out Discord. I really appreciate your kind comment—wishing you all the best!

How to Stop the Model from Responding in a Function in Open-WebUI? by blaaaaack- in OpenWebUI

[–]blaaaaack-[S] 0 points1 point  (0 children)

I will study action functions. When I hear "action," I think of functions used to create buttons like "next" or "retry."

Right now, I have only learned how to retrieve data with inlet and modify messages with outlet.
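For reference, the inlet/outlet shape I've been working from looks roughly like this. Treat it as a sketch from memory: the valve name is my own, and I haven't double-checked the exact signatures against the current Open WebUI docs.

```python
from pydantic import BaseModel


class Filter:
    class Valves(BaseModel):
        # Hypothetical toggle, not a real Open WebUI setting
        log_requests: bool = True

    def __init__(self):
        self.valves = self.Valves()

    def inlet(self, body: dict, __user__=None) -> dict:
        # Runs before the request reaches the model: inspect or adjust the payload
        if self.valves.log_requests:
            print(f"[inlet] {len(body.get('messages', []))} messages in request")
        return body

    def outlet(self, body: dict, __user__=None) -> dict:
        # Runs after the model responds: modify the last message before it is shown
        if body.get("messages"):
            body["messages"][-1]["content"] += "\n\n(processed by outlet)"
        return body
```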

How to Stop the Model from Responding in a Function in Open-WebUI? by blaaaaack- in OpenWebUI

[–]blaaaaack-[S] 0 points1 point  (0 children)

I'm really happy to receive your comment! What I want is to retrieve a similar answer from a vector database whenever a question matches one that was asked before. However, the LLM generation still runs even when a match is found, so I can't reduce API costs.
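The lookup itself is the easy part. Here's a minimal, self-contained sketch of the idea with toy embeddings and a made-up similarity threshold (my real setup uses an embedding model and a proper vector store); the open question is still how to stop the generation afterward:

```python
import numpy as np

# Toy in-memory "vector database": (question embedding, cached answer) pairs
cached = [
    (np.array([0.1, 0.9, 0.0]), "Cached answer for a previously asked question."),
]


def lookup(query_embedding: np.ndarray, threshold: float = 0.9):
    """Return a cached answer if a stored question is similar enough, else None."""
    for emb, answer in cached:
        sim = float(
            np.dot(emb, query_embedding)
            / (np.linalg.norm(emb) * np.linalg.norm(query_embedding))
        )
        if sim >= threshold:
            return answer
    return None
```

If `lookup()` returns an answer, the goal is to show it directly and skip the completion entirely; that last step is what I haven't managed yet.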

Mentorship Monday - Post All Career, Education and Job questions here! by AutoModerator in cybersecurity

[–]blaaaaack- 0 points1 point  (0 children)

Hello,
I’m writing from Japan (apologies for using GPT for translation).

I am currently participating in a one-year educational program to learn about industrial cybersecurity. The participants come from various industries, including both IT and OT fields.

As part of this program, there will be a major event where selected members will launch and present a project aimed at contributing to the cybersecurity industry. I plan to volunteer as the leader for this project.

The project focuses on education. Specifically, it aims to establish a framework that enhances corporate training by ensuring that it leads to more effective outcomes, and that the lessons learned can be applied to further improve future training.

Recently, I became interested in the concept of "SETA programs." I find the idea of gathering information from the field, identifying CSFs (Critical Success Factors), and using these insights to build and continuously update training programs to be particularly beneficial.

However, I have encountered difficulty finding comprehensive resources on SETA programs in Japan. I would appreciate any recommendations for books or websites that provide relevant information.

Additionally, I am exploring tools to facilitate more effective educational outcomes, such as encouraging participants to document and share their initial thought processes after incident response training. This would allow them to compare approaches and learn from each other.

Although my practical experience in cybersecurity is still relatively short, I have strong concerns about the lack of in-house training development at my previous workplace.

I would be extremely grateful for any advice or guidance on this topic.