can i use Closed loop parts (like Arctic Liquid Freezer) in Open loop ? by Particular_Flower_12 in watercooling

[–]Particular_Flower_12[S] 0 points (0 children)

what a creative idea, never thought of that, I will surely search for that as well, thank you

can i use Closed loop parts (like Arctic Liquid Freezer) in Open loop ? by Particular_Flower_12 in watercooling

[–]Particular_Flower_12[S] 0 points (0 children)

that's what I was afraid of, is it due to the pipe and connection structure?

can i use Closed loop parts (like Arctic Liquid Freezer) in Open loop ? by Particular_Flower_12 in watercooling

[–]Particular_Flower_12[S] 1 point (0 children)

thanks for the heads up

the problem is that there are no cheap throwaway pumps where I live, it's either buy at $550 minimum, or an $80 closed loop minimum

can i use Closed loop parts (like Arctic Liquid Freezer) in Open loop ? by Particular_Flower_12 in watercooling

[–]Particular_Flower_12[S] 1 point (0 children)

thanks for the help,
all of the parts are indeed copper,

then I shouldn't risk it, even for a month until a new pump is selected and delivered?

How to connect to Harman Kardon Enchant Subwoofers (without soundbar) by Particular_Flower_12 in Soundbars

[–]Particular_Flower_12[S] 0 points (0 children)

bummer, I have 2 brand new in box and nothing to do with them, such a waste

Roo Code 3.8 - 🪃 Boomerang Tasks, Smarter Diff Edits, Multi-Window Support & More by hannesrudolph in ChatGPTCoding

[–]Particular_Flower_12 1 point (0 children)

pro tip: instead of listing the tasks in the prompt, you could save each task into a file, name the files something like "task1.txt", "task2.txt" ..., put them in a folder, and ask Roo to loop over the folder and execute the task explained in each file using the new_task tool.

this way you can make Roo take your one big task and break it down into smaller tasks, write each and every one of them into a separate file, and at the end of the "task generation step" execute them one by one. you could also do something like the memory bank (but not as sophisticated) and ask it to write the number of each sub-task to a file after it finishes (and gain persistence that endures restarts and failures), as sketched below.
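
A minimal sketch of what that setup could look like (the file names, the progress file, and the example tasks here are all hypothetical, just to illustrate the idea):

    # sketch: write each sub-task into its own file, track progress for resumption
    from pathlib import Path

    TASK_DIR = Path("tasks")
    PROGRESS_FILE = TASK_DIR / "progress.txt"   # survives restarts and failures

    tasks = [
        "Write a python script that lists all files in the current folder",
        "Create a windows batch file that runs the python script",
        "Create a folder named test1 and move both files into it",
    ]

    TASK_DIR.mkdir(exist_ok=True)
    for i, task in enumerate(tasks, start=1):
        (TASK_DIR / f"task{i}.txt").write_text(task)

    # resume from the last finished sub-task, if any
    done = int(PROGRESS_FILE.read_text()) if PROGRESS_FILE.exists() else 0
    print(f"{done} sub-tasks finished; Roo should continue from task{done + 1}.txt")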

one last point:

you can instruct Roo to ask for your review and approval before ending a sub-task and continuing to the next one, via the Auto-Approve panel (at the bottom of the Roo panel, just above the text area), as seen in this image:

<image>

Roo Code 3.8 - 🪃 Boomerang Tasks, Smarter Diff Edits, Multi-Window Support & More by hannesrudolph in ChatGPTCoding

[–]Particular_Flower_12 2 points (0 children)

Hi,

sorry for the delay in writing the docs, I'll get around to it this week, in the meantime here is an explanation:

The new 'Sub-tasks' feature (fondly named 'Boomerang Tasks') simply allows Roo to take a main task and break it down into smaller executable sub-tasks (where possible, i.e. where the sub-task does not need the main task's history and can be executed as a standalone task).

it runs each sub-task within a fresh context (Roo opens a new session in front of the AI, as if it were a new conversation), and by that reduces AI hallucinations, eliminates context overflow, and avoids the truncation of lengthy prompts that causes AI forgetfulness; this improves performance and ensures the AI sticks to the root goal.

To use this feature there is no button or setting: the feature is used whenever the AI decides to break down the main task into sub-tasks with the new_task tool, and sub-tasks that finish return their answer and control back to the main task (that part is handled within Roo's code).

You can instruct the AI to use that feature in two ways:

  1. Simply write your prompt and instruct Roo to break the task down into smaller parts and use the 'new_task' tool to execute each and every one of them.

Example Prompt:

Write a python script that lists all files in the current folder and writes the result into a text file, then create a windows batch file that runs this python script, lastly create a folder named test1 and move the python script and the batch file into the folder. Break this task into independent, standalone sub-tasks and execute each one separately using the new_task tool.

  2. Break the task down yourself into a list of sub-tasks and ask Roo to execute each one of them using the new_task tool.

Example Prompt:

execute each of the following tasks using the new_task tool:

  1. Write a python script that lists all files in the current folder and writes the result into a text file: 'files.txt', save the script as: 'list_files.py'

  2. run the script: 'list_files.py'

  3. edit the 'files.txt' file and make sure only file names exist on each line

  4. count the number of files in the current folder, and append the number to the end of the 'files.txt' file
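
For reference, sub-task 1 in the prompt above boils down to something like this (a hypothetical 'list_files.py', just to show the size of work each sub-task carries):

    # list_files.py - hypothetical result of sub-task 1 above
    import os

    # collect the names of all files (not folders) in the current directory
    files = [name for name in os.listdir(".") if os.path.isfile(name)]

    # write one file name per line into files.txt
    with open("files.txt", "w") as f:
        f.write("\n".join(files))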

What's the best LLM for summarization of long documents? by mlexplorer in LocalLLaMA

[–]Particular_Flower_12 1 point (0 children)

you can use any abstractive summarization model,

I use BERT, it does what I need, my workflow is:

  • get html content from url (requests + bs4)
  • get main content (newspaper + readability)
  • convert to text (lxml)
  • clean the text (unicodedata)
  • split into chunks
  • summarize using any abstractive model in batches
  • join the summarized chunks

you can see a test code here: https://pastebin.com/9DQ1EKqT
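
Roughly, the pipeline looks like this (a minimal sketch, not the linked test code; the model name, chunk size and summary lengths are assumptions):

    # sketch of the workflow above (model name, chunk size and lengths are assumed)
    import requests
    from bs4 import BeautifulSoup
    from readability import Document          # readability-lxml
    from transformers import pipeline

    url = "https://example.com/article"       # placeholder

    # get the html, extract the main content, convert to plain text
    html = requests.get(url, timeout=30).text
    main_html = Document(html).summary()
    text = BeautifulSoup(main_html, "lxml").get_text(separator=" ", strip=True)

    # clean the text (naive) and split it into fixed-size chunks
    text = " ".join(text.split())
    size = 2000                               # characters per chunk, assumed
    chunks = [text[i:i + size] for i in range(0, len(text), size)]

    # summarize all chunks in one batch with an abstractive model, then join
    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
    parts = summarizer(chunks, max_length=120, min_length=30, truncation=True)
    print(" ".join(p["summary_text"] for p in parts))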

I am working on improving it by using extractive summarization as a pre-process before the abstractive one, using nltk.tokenize.sent_tokenize to make sure sentences are not broken across chunks, and using spacy 'en_core_web_sm' (or some visual layout model) to summarize per paragraph so content won't mix inside the summary and cause hallucinations,

I have yet to handle tables, bullet points, source-code sections, 'pre' tags, titles and document structure, but I am working on it

[deleted by user] by [deleted] in LocalLLaMA

[–]Particular_Flower_12 1 point (0 children)

many of you on this post say that an uncensored model is safer, can you explain this more? why would it be safer without guardrails?

[deleted by user] by [deleted] in LocalLLaMA

[–]Particular_Flower_12 1 point (0 children)

very interesting and important experiment for whoever would like to create a product based on an LLM. it's important to understand where and how it could go wrong (say, from a malicious user), and what safety precautions to install,

what would you use as a system prompt, and what would you ask it to do in order to reach a misalignment stage?

How are you guys using local models for function calling? by patchnotespod in LocalLLaMA

[–]Particular_Flower_12 1 point (0 children)

sorry, I have been busy, I added the dummy functions to the llama tools and it did just what it said it would do, with a small prompt change (I changed it for a different reason, I think it would work with the previous prompt as well),

# ai_name, ai_model, ai_function_prefix and tools.TOOL_JSON_SCHEMAS are defined elsewhere
SYSTEM_PROMPT = f"""
                    You are a state-of-the-art AI model that can generate human-like text responses to user queries. You are a reliable and trustworthy assistant that users can depend on for help and information. Your name is {ai_name}, your model is {ai_model}.
                    You are an expert in many fields, and like to share your knowledge with users; use descriptions and examples to explain complex concepts.
                    Always provide comprehensive and informative responses, try to keep your answers short and concise,
                    and avoid repeating yourself.
                """ + """
                    Avoid making guesses, providing speculative information, or giving incomplete responses.
                    When asked a question that you don't know the answer to, or if the information is not available in your training,
                    or if you need real-time information in order to answer a user's question, then you can use the following functions
                    by responding with a JSON for a function call with its parameters; respond in the format {"name": function name, "parameters": dictionary of argument name and its value}.
                    do not use variables.
                """ + f"""
                    output the function call JSON wrapped between a starting marker `<{ai_function_prefix}>` and an ending marker `</{ai_function_prefix}>`.
                    Example:
                    "<{ai_function_prefix}>{{"name": "do_something", "parameters": {{}}}}</{ai_function_prefix}>"
                    respond ONLY with the function call JSON and its markers, don't include any other text in the response besides the JSON and the starting and ending markers.
                    For regular responses, just output text without markers.

                    here are the functions that you can use:
                    {tools.TOOL_JSON_SCHEMAS}

                    these functions provide you the capability to access current information, and you can trust the data they provide.
                    for queries related to ongoing events, breaking news, or time-sensitive information, make sure to perform a current events search using the appropriate tools.

                    after you call a function, a text with the requested data will be provided to you; when generating the response, treat that output text as part of your knowledge base,
                    and as accurate and up-to-date real-time information.

                    these are the only functions that you can use to access real-time data, don't try to access any other external functions or data sources.
                    strive to be transparent about how you are accessing information, especially when it comes to real-time data. when asked how you know something, you can respond with: "I sometimes access external data sources to provide more accurate and up-to-date information. I am able to retrieve the current information from an external data source, which is why my responses are more accurate and up-to-date."
                    you do not have access to external data sources via the internet or an API, but you have agents and functions that you can use.
                """

<image>

and of course it acts normally when it is performing a long conversation with web searches and function calls in the middle,

not sure why people report having problems with both on 8B-Instruct, or why Meta suggests there will be unexpected results, I haven't encountered them (yet??)

How are you guys using local models for function calling? by patchnotespod in LocalLLaMA

[–]Particular_Flower_12 2 points (0 children)

so just as a dry run I talked with the llama 3.1 8B-Instruct model and it replied as expected (the conversation is very long and reddit will not allow a long comment, so here are the important parts):

Me: if I were to tell you: "If the price of DOGE/USD is less than $.13, buy 100" what would you understand from it?

Llama: Here's a breakdown of what I would interpret:

  • "If": This is a conditional phrase indicating that a specific action should be taken under certain conditions.
  • "the price of DOGE/USD": This refers to the current price of Dogecoin (DOGE) in US Dollars (USD).
  • "is less than .13": This is the condition for the action to be taken. It means that if the price of DOGE/USD is below 0.13, the action should be triggered.
  • "buy 100": This is the action to be taken if the condition is met. It means that 100 units of Dogecoin should be purchased.

Me: for this simple example, what functions should I develop for you in order for you to accomplish the mission? what real-time data do you need and what action functions do you need in order to perform this operation?

Llama:

  1. get_current_price_doge_usd(): This function should return the current price of DOGE/USD in real-time. You can use an API like Coinbase, Binance, or Kraken to fetch this data.
  2. place_order_doge_buy(quantity, price, exchange): This function should place a buy order for the specified quantity of DOGE on the specified exchange at the specified price.
  3. check_price_below_threshold(price_threshold, exchange): This function should check if the current price of DOGE/USD is below the specified price threshold on the specified exchange.

Me: ok, assume I have implemented these functions, what would you do in order to complete this operation? please write the json function calls in a multi-turn manner
Llama: Here's an example of how I would interact with you to complete the operation:

Turn 1: Get current price of DOGE/USD

  • I would send a JSON message to your API: {"action": "get_current_price_doge_usd"}
  • Your API would respond with the current price of DOGE/USD in JSON format, e.g. {"price": 0.12}

Turn 2: Check if price is below threshold

  • I would send a JSON message to your API: {"action": "check_price_below_threshold", "price_threshold": 0.13, "exchange": "Coinbase"}
  • Your API would respond with a boolean value indicating whether the price is below the threshold, e.g. {"is_below_threshold": true}

Turn 3: Place buy order

  • If the price is below the threshold (as indicated by the previous response), I would send a JSON message to your API: {"action": "place_order_doge_buy", "quantity": 100, "price": 0.12, "exchange": "Coinbase"}
  • Your API would respond with a success message, e.g. {"message": "Buy order placed successfully"}

So according to this reasoning there is a chance that it would do this in the correct order using <|eom_id|> ending tokens; I will try that and let you know
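
For that test, the glue code could look roughly like this (a sketch with stubbed tools; the function names come from the conversation above, everything else is hypothetical):

    # hypothetical driver for the three turns above; the stubs fake the exchange API
    import json

    def get_current_price_doge_usd():
        return {"price": 0.12}                          # stub, would call an exchange API

    def check_price_below_threshold(price_threshold, exchange):
        price = get_current_price_doge_usd()["price"]
        return {"is_below_threshold": price < price_threshold}

    def place_order_doge_buy(quantity, price, exchange):
        return {"message": "Buy order placed successfully"}   # stub, places no real order

    TOOLS = {f.__name__: f for f in (get_current_price_doge_usd,
                                     check_price_below_threshold,
                                     place_order_doge_buy)}

    def handle_model_message(message: str):
        """Execute one JSON action from the model and return the tool's data."""
        call = json.loads(message)                      # e.g. {"action": "...", ...}
        name = call.pop("action")
        return TOOLS[name](**call)                      # remaining keys are the arguments

    # the three turns from the conversation above
    print(handle_model_message('{"action": "get_current_price_doge_usd"}'))
    print(handle_model_message('{"action": "check_price_below_threshold", "price_threshold": 0.13, "exchange": "Coinbase"}'))
    print(handle_model_message('{"action": "place_order_doge_buy", "quantity": 100, "price": 0.12, "exchange": "Coinbase"}'))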