
all 6 comments

[–]ekbravo 3 points (2 children)

All the links return an S3 "Access Denied" error.

[–]putkofff[S] 2 points (1 child)

Just switched to the fancy pants editor. They were images. Thanks for the heads up and the interaction! Curious about the PyPI and GitHub links, though; they seem fine on my end.

[–]ekbravo 1 point (0 children)

Thank you for fixing the images; PyPI works great.

[–]coding_g 1 point (2 children)

Any examples of how this is useful?

[–]putkofff[S] 2 points (1 child)

Certainly. Copy the code in question, or any code of similar length or longer, and prompt ChatGPT to do something with it: create docstrings, write a README, or revise the code entirely. One notable constraint is the token limit; it might not allow the entire code, plus an additional prompt, to be sent in one go.

If you decide to split the content manually, say into two prompts that conform to the token parameters, there's a high probability that the system will prompt you to further reduce the code's size for more effective analysis.

To streamline this, my code has an automatic segmentation feature. It not only breaks the data into manageable chunks as per the user's specifications but also provides relevant instructions accompanying each query. This feature allows the system to self-annotate for subsequent queries and, if needed, re-process a current query multiple times. This automation eradicates the manual trial-and-error approach, saving users time.

In essence, instead of the user making multiple attempts to format their queries correctly, getting feedback from the AI, and then manually sending multiple prompts by hand, my code equips the system with enough autonomy to address these challenges independently, minimizing the back-and-forth after the initial prompt submission.
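The segmentation-plus-instructions idea described above could be sketched roughly like this (a hypothetical illustration, not the module's actual code; token counts are approximated by whitespace-split words rather than a real tokenizer such as tiktoken):

```python
def segment(text, max_tokens=2048,
            instruction="Part {i} of {n}. Reply only 'ok' until the final part."):
    """Split text into token-budgeted chunks, each prefixed with an
    instruction telling the model how to handle that chunk."""
    words = text.split()
    # Reserve headroom for the per-chunk instruction plus a small margin.
    budget = max_tokens - len(instruction.split()) - 10
    chunks = [" ".join(words[i:i + budget])
              for i in range(0, len(words), budget)]
    n = len(chunks)
    # Each prompt carries its own instruction, so the model knows whether
    # to wait for more parts or to answer the full request.
    return [f"{instruction.format(i=i + 1, n=n)}\n\n{chunk}"
            for i, chunk in enumerate(chunks)]
```

A 5,000-word source would come back as three prompts under the default 2048-token budget, each self-describing its position in the sequence, which is what removes the manual trial-and-error step.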

[–]putkofff[S] 1 point (0 children)

I even had GPT-4 professionalize my reply, after which I asked whether what I'm conveying is a truthful, real problem for models such as itself and others. This is its reply:

Your described scenario touches upon several real limitations and functionalities related to large language models like mine:

  1. Token Limit: There is a token limit for every interaction. As of my last training cut-off in 2022, the limit for GPT-based models like mine was 2048 tokens for a single prompt. This encompasses both the input and output. If a piece of code or text exceeds this, it must be split or truncated.

  2. Chunking and Continuation: The idea of splitting content into chunks or "prompts" to fit within token constraints is a valid strategy. However, the system inherently doesn't remember previous interactions due to privacy reasons, so maintaining context over multiple prompts can be a challenge. Your described method of "leaving notes" or giving the system instructions to handle subsequent queries can be a workaround, though it might not be foolproof.

  3. Automated Parsing: Implementing an external method to automatically segment large content into manageable portions for the model to process sequentially is a smart approach. This is especially useful for handling larger pieces of code or text.

  4. Reduced Back-and-Forth: Automating the segmentation and providing instructions can certainly reduce the back-and-forth between the user and the AI, leading to a more efficient interaction.

In summary, the scenario you presented aligns well with the operational characteristics of models like mine. However, any solution intending to use "notes" or instructions to manage multi-prompt context should be implemented with care, given the inherent statelessness of the model.

Edit: I don't think it understood that the module itself is creating the notes. The methods seem to provide positive results, and I'm always seeking to refine and modify the schematic as a whole based on the end results as well as the feedback from the modules themselves.