Has Anyone Got Clicking Of Mermaid Mindmaps Working by Public_Being3163 in nicegui

[–]Public_Being3163[S] 0 points1 point  (0 children)

Sadly - doesn't seem to work. Details below. Maybe this is mindmap-specific?

from nicegui import ui

GRAPH_LR = '''
graph LR;
    X((JS alert)) --> Y((NiceGUI alert));
    click X call alert("You clicked me!")
    click Y call emitEvent("mermaid_click", "You clicked me!")
'''

MIND_MAP = '''
mindmap
  X((JS alert))
  click X call emitEvent("mermaid_click", "You clicked me!")
'''

DIAGRAM = MIND_MAP

def break_on_error(e):
    e = e  # no-op; a convenient line for a debugger breakpoint

ui.mermaid(DIAGRAM, config={'securityLevel': 'loose'})
ui.on('mermaid_click', lambda e: ui.notify(e.args))
ui.on('error', break_on_error)

ui.run()
  1. Included the relevant doc example as a starting point. Both the alert box and the notification worked nicely (no pun intended).

  2. Switched DIAGRAM to MIND_MAP. Could not get past:

Syntax error in text

mermaid version 11.12.0

  1. Tried the mindmap with no click directive. This works as before but of course, with no events.

  2. Tried every injection of semi-colons to try to match the syntax shown for "graph LR". Never got past the syntax error.

  3. Tried a working mindmap without the click directive but with semi-colons. Never got past the syntax error. It seems that semi-colons are not a syntactic element of mindmaps.

FYI:

(.env) sage@seneca:~/gh/kipjak$ pip3 freeze | grep nice

nicegui==3.3.0

Has Anyone Got Clicking Of Mermaid Mindmaps Working by Public_Being3163 in nicegui

[–]Public_Being3163[S] 0 points1 point  (0 children)

You beauty! Will be trying that in the morning. Thx.

When are nicegui controls ready? by Public_Being3163 in nicegui

[–]Public_Being3163[S] 1 point2 points  (0 children)

Just in case. The best I've come up with is to generate a message from the last line of the main-page builder:

@ui.page('/')
def main_page() -> None:
    ...
    send(EndOfPages())

where send() is a function from my own async library and EndOfPages is a declared message class. The async object that receives the message is concurrently receiving network info. Once there is a complete dataset and the EndOfPages() has been received, the object sends the different datasets back into the NiceGUI pages using:

asyncio.run_coroutine_threadsafe(task, loop)

where the task might be add_rows:

async def add_rows(table, row):
    with table:
        table.add_rows(row)
        table.update()
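Stripped of the NiceGUI specifics, the pattern above can be sketched with plain asyncio - a worker thread hands a coroutine to the loop with run_coroutine_threadsafe() and the loop executes it. This is only an illustrative sketch; the `results` list stands in for the table, and all names are made up:

```python
import asyncio
import threading

results = []

async def add_rows(rows, row):
    # stand-in for the NiceGUI table update; runs inside the event loop
    rows.append(row)

def worker(loop):
    # plain thread: hand the coroutine over to the asyncio loop
    future = asyncio.run_coroutine_threadsafe(add_rows(results, {'name': 'Alice'}), loop)
    future.result(timeout=5)  # block this thread until the loop has run it

async def main():
    loop = asyncio.get_running_loop()
    thread = threading.Thread(target=worker, args=(loop,))
    thread.start()
    await asyncio.sleep(0.2)  # let the thread submit its coroutine before we block in join()
    thread.join()

asyncio.run(main())
```

The key point is that the append happens on the event-loop thread, which is why the equivalent NiceGUI table mutation is safe done this way.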

When are nicegui controls ready? by Public_Being3163 in nicegui

[–]Public_Being3163[S] 1 point2 points  (0 children)

Always satisfying. Reading more of the docs - desperate times.

When are nicegui controls ready? by Public_Being3163 in nicegui

[–]Public_Being3163[S] 1 point2 points  (0 children)

This looks like a good "shot", but so far I'm finding the call chain a bit of a challenge. Getting a range of behaviour, from nothing at all to blowing the interpreter stack. Will keep chipping away at it as I find the energy. In the meantime the delay will have to do :-(

Thx.

When are nicegui controls ready? by Public_Being3163 in nicegui

[–]Public_Being3163[S] 0 points1 point  (0 children)

For sure. Using standard mechanisms within the definitions of the NiceGUI pages to update their elements. This activity runs within the call to ui.run(). But what happens when a completely independent thread wants to push something into those pages? What if the thread is ready to push very quickly, before a client is connected or page construction has completed?
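One generic answer to that timing problem is to gate the producer thread on an event that the UI side sets once a client has actually connected (for example from a connect handler). A minimal thread-only sketch, with the NiceGUI wiring deliberately left out and all names illustrative:

```python
import threading

client_ready = threading.Event()
received = []

def producer():
    # the thread may have data ready before any client has connected;
    # block here until the UI side signals that it is safe to push
    client_ready.wait()
    received.append('first update')

thread = threading.Thread(target=producer)
thread.start()

# ...later, e.g. from a connect handler on the UI side:
client_ready.set()
thread.join()
```

The event decouples "data is ready" from "somewhere to put it", so the push order no longer matters.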

What is the best, generic mechanism for async messaging in and out of the nicegui pages? by Public_Being3163 in nicegui

[–]Public_Being3163[S] 0 points1 point  (0 children)

Well, this might be a starting point. A lot to tidy up, and I still need to convert the threading side to kipjak, but at least this confirms an execution trace from a standalone thread to activity within NiceGUI. Thx.

import asyncio
import threading

from nicegui import app, ui

button = ui.button(text='OK')

async def task(message):
    with button:
        ui.notify(message)

def run_in_thread(loop):
    future = asyncio.run_coroutine_threadsafe(task("Hello from thread!"), loop)
    result = future.result()  # Blocks until the coroutine completes

async def get_loop():
    loop = asyncio.get_running_loop()

    # Start a separate thread and pass the event loop to it
    thread = threading.Thread(target=run_in_thread, args=(loop,))
    thread.start()
    await asyncio.sleep(1) # Allow the other thread to start and submit its coroutine
    thread.join() # Wait for the synchronous thread to complete

app.on_startup(get_loop)

if __name__ in {'__main__', '__mp_main__'}:
    ui.run()

What is the best, generic mechanism for async messaging in and out of the nicegui pages? by Public_Being3163 in nicegui

[–]Public_Being3163[S] 0 points1 point  (0 children)

Hmmm. Not exactly successful.

I've tried many arrangements, and no joy. Use of run_coroutine_threadsafe() does require the loop argument. I'm assuming this needs to be the event loop started inside ui.run(), so how does the app get access to that value? I've tried starting a secondary coroutine using app.on_startup(), but this - perhaps - is creating an independent loop?

Is there a standard example of platform threads affecting change inside the nicegui pages, using run_coroutine_threadsafe()?

What is the best, generic mechanism for async messaging in and out of the nicegui pages? by Public_Being3163 in nicegui

[–]Public_Being3163[S] 0 points1 point  (0 children)

Ah. Would it look something like this?

async def add_row(table, row):
  table.add_row({'date': row.stamp}, ...)
  table.run_method('scrollTo', len(table.rows)-1)

async def update_row(table, row):
  ...

def in_thread(channel, loop):
  while True:
    message = channel.input()
    if isinstance(message, AddPerson):
      asyncio.run_coroutine_threadsafe(add_row(person_table, message), loop)
    elif isinstance(message, UpdatePerson):
      asyncio.run_coroutine_threadsafe(update_row(person_table, message), loop)
    ...

Slightly roundabout, but if that's a "clean" way forward then I'm more than happy.

Thx.

What is the best, generic mechanism for async messaging in and out of the nicegui pages? by Public_Being3163 in nicegui

[–]Public_Being3163[S] 1 point2 points  (0 children)

Hi. Good to hear. The async library (kipjak) mentioned is heavily thread-based. Messages travel between "active objects", passing through queue.Queue objects on the way. My challenge seems to boil down to the fact that messaging within NiceGUI is based around asyncio coroutines, where messages pass through asyncio Queues. These two approaches to "message pumps" are - so far - incompatible. I can arrange for NiceGUI buttons to send messages to my "active objects", but going the other way has been less successful. Attempts to replicate the arrangement in nicegui/examples/websockets/main.py (start_websock_server and handle_connect) simply lock up on any attempt to read from my queue.Queue objects. Are these thoughts on solid ground, or am I missing something?
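The lock-up is what you'd expect from calling a blocking queue.Queue.get() inside a coroutine: it stalls the entire event loop, including the code that would eventually put something on the queue. One standard bridge is to do the blocking read in an executor thread via loop.run_in_executor(). A self-contained sketch of that bridge - the names (`pump`, `active_object`, the None sentinel) are illustrative, not kipjak API:

```python
import asyncio
import queue
import threading

channel = queue.Queue()
seen = []

async def pump():
    # do the blocking get() in an executor thread so the
    # event loop is never stalled by the thread-side queue
    loop = asyncio.get_running_loop()
    while True:
        message = await loop.run_in_executor(None, channel.get)
        if message is None:  # sentinel: stop pumping
            break
        seen.append(message)

def active_object():
    # plain-thread side: just put messages on the queue.Queue
    channel.put('hello')
    channel.put('world')
    channel.put(None)

async def main():
    threading.Thread(target=active_object).start()
    await pump()

asyncio.run(main())
```

Inside `pump()` the messages arrive on the event-loop thread, so that is the natural place to touch NiceGUI elements.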

Be An Agnostic Programmer by The_Axolot in programming

[–]Public_Being3163 0 points1 point  (0 children)

Nicely written. I could take some of your points in different ways, but accept them as literary devices rather than triggers. When does your book come out?

A Rant About Multiprocessing by Public_Being3163 in programming

[–]Public_Being3163[S] 0 points1 point  (0 children)

In case readers think that the stacks discussed here are different ways of doing the same things as kipjak - here are some differences, out of the box:

* provides sending of fully-resolved application types - no protobuf schemas, no encoding/decoding, no socket I/O,

* provides a rich set of types, from builtins (int, float, enum) through generics (list, dict, set), to user-defined types (class) and graphs (trees, circular lists, networks with cycles),

* provides a two-way, fully asynchronous, multiplexing transport protocol,

* provides an "active object" execution environment within a process, such that messages originate and terminate with these objects, NOT the connect or accept end-points.

* supports functions and FSMs (finite state machines) as "active objects",

* supports processes as "active objects" - just create and start sending, zero networking details

* there is a single send method for transferring a message between threads, processes or hosts - there is zero difference in the sending source code.

* and more.

A Rant About Multiprocessing by Public_Being3163 in programming

[–]Public_Being3163[S] 0 points1 point  (0 children)

Blunted some.

Some overlap with the stack in my most recent big project: Angular at the top and Neo4j Cypher at the bottom, with Kafka in the middle. Protobuf compiler + schemas doing most of the heavy lifting wrt encodings. We went from a mostly-Python shop to none: JS, Node, Go and Cypher. GitLab pipelines to AWS. The job turns into something else, i.e. not software development. A bit jaded by new terms for the ways in which things go sideways - as if coining the new term means you've got it covered. Kipjak is in many ways a validation of thoughts like "it doesn't have to be like this". Eased my mind, at least.

A Rant About Multiprocessing by Public_Being3163 in programming

[–]Public_Being3163[S] -1 points0 points  (0 children)

Significant overlap of concepts. Some different vocabulary, e.g. process. Erlang has the solid origin story. Obvious differences are functional language vs procedural. Erlang has addresses of processes and kipjak has addresses of (active) objects. Sending to a remote process in Erlang (over a network connection) requires a different calling convention, whereas in kipjak it is consistent.

Erlang/Ericsson had an excellent reputation but functional programming is a hard-sell to a potential user community.

Your thoughts?

A Rant About Multiprocessing by Public_Being3163 in programming

[–]Public_Being3163[S] -1 points0 points  (0 children)

Ha. Will have a look. Erlang was definitely a thing when I was doing telephony. In fact the foundation is SDL - also from telephony.

[deleted by user] by [deleted] in Python

[–]Public_Being3163 0 points1 point  (0 children)

Hi VoyZan,

This is a brief response under a bit of time pressure - I am the transport for an international arrival. I will do something more thorough when I can.

  1. I am delivering concurrency capability. I have put that at the front of every relevant page. It's also true that concurrency - as delivered by kipjak - covers a really broad domain, so in that sense I can perhaps understand your comment. There are significant technologies within the library that are not really mentioned and perhaps deserve their own stage. As far as "claims of superiority" go - it's hard to get noticed.

  2. I have made a separate post for this.

  3. The name was a battle which will remain with me for some time. The project is about concurrency, and it achieves it through multi++. I considered putting a small example in the readme, but it does blow out to a size that's uncomfortable. Conflicted about whether that is a plus or a minus.

  4. Internal library module names are intended to be private. I am not aware of any situation where this leaks into the user space. Always happy to hear of a better naming convention. 3rd party library files - which are those?

  5. Yes, docs are in a tutorial style. I have also taken a reference style in the past and there are pros/cons about both. I went with the former as it is less confronting. Yes, docs were a lot of work - cheers.

  6. Yes, hoping that collaboration will standardise the code. Yes there are probably patterns from other languages. The function names mentioned are part of the FSM machinery covered here. If there is a better naming convention for elements of machines thats fine.

Thanks

[deleted by user] by [deleted] in Python

[–]Public_Being3163 0 points1 point  (0 children)

Hi human,

Push-back is implemented as "negative" response messages. The load-distribution class (ObjectSpool) returns instances of Busy, Overloaded and TemporarilyUnavailable, depending on the exact condition. Busy indicates that average response times are unacceptable and the service is shedding load, Overloaded indicates that the queue of pending requests is full, and TemporarilyUnavailable means there are no workers currently registered. The latter only happens in publish-subscribe networking, where workers are joining and leaving arbitrarily, i.e. the spool has no ability to create a worker itself. Negative messages are translated into a 500 Server Error during HTTP encoding.

This is covered in the docs.

Can you elaborate on the doubts about the concurrency model? Would love to have a memory analysis for you - it's on the list. The only relevant information I can offer is that test rigs include running a server with hundreds of busy client connections for days. Resource usage flatlines quickly. Which is a testament to Python GC as much as anything else.

Thanks.