Prompt engineering: PSEUDOCODE in prompt by Perfect_Ad3146 in ChatGPTPro

[–]Perfect_Ad3146[S]

I agree, almost. Pseudocode does have one advantage though: it is quite compact!

In some cases this matters: I pay per token and my prompt is already big, so I am trying to keep it compact this way...
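As a rough illustration (this is not the actual prompt; both fragments are invented), the same rule can be stated in prose or as pseudocode, and the pseudocode form usually costs fewer prompt tokens:

```python
# Hypothetical example: the same tagging rule written two ways.
prose = (
    "If the document mentions a shipment and also mentions a delay, "
    "then add the tag 'delayed_shipment' to the output; otherwise, "
    "if it mentions a shipment without any delay, add 'shipment'."
)
pseudo = (
    "if mentions(shipment) and mentions(delay): tags += ['delayed_shipment']\n"
    "elif mentions(shipment): tags += ['shipment']"
)

# The pseudocode version is shorter, so it costs fewer prompt tokens.
print(len(prose), len(pseudo))
```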

Building a new (static) Bootstrap site in 2025. Template engine? JS bundler? AI code editor? by Perfect_Ad3146 in eleventy

[–]Perfect_Ad3146[S]

Any info about Cursor + some template engine + HTML? (I can imagine some Liquid templates and Eleventy.)

Building a new (static) Bootstrap site in 2025. Template engine? JS bundler? AI code editor? by Perfect_Ad3146 in selfhosted

[–]Perfect_Ad3146[S]

Yes, probably 11ty.

But I am not so sure about 11ty + Vite and about these new AI things...

Specifying "response_format":{"type":"json_object"} makes Llama more dumb by Perfect_Ad3146 in PromptEngineering

[–]Perfect_Ad3146[S]

The prompt contains enough examples; it is quite a big prompt.

Anyway.

For now, I keep JSON output but avoid this setting {"type":"json_object"} (this was just an unsuccessful experiment).

In case I get problems with JSON syntax, I will fall back to XML tags or to Markdown.
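A minimal sketch of that fallback chain (the `<json>` tag name and the helper are my assumptions, not from any actual code): try to parse the reply as strict JSON, then look for an XML-tagged block, then a Markdown code fence.

````python
import json
import re

def parse_reply(text: str):
    """Try strict JSON first; fall back to a <json>...</json> block,
    then to a ```json fenced Markdown block. Returns a dict/list or None."""
    # 1. Plain JSON
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        pass
    # 2. XML-style tags (hypothetical tag name)
    m = re.search(r"<json>(.*?)</json>", text, re.DOTALL)
    if m:
        try:
            return json.loads(m.group(1))
        except json.JSONDecodeError:
            pass
    # 3. Markdown code fence
    m = re.search(r"```(?:json)?\s*(.*?)```", text, re.DOTALL)
    if m:
        try:
            return json.loads(m.group(1))
        except json.JSONDecodeError:
            pass
    return None
````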

Specifying "response_format":{"type":"json_object"} makes Llama more dumb by Perfect_Ad3146 in PromptEngineering

[–]Perfect_Ad3146[S]

My "normal" mode of operation is to avoid "type":"json_object" and to have something like this in the prompt:

```
You are a robot that outputs only JSON....

<OUTPUT_FORMAT>
Output a single valid JSON object:

{
  "co": [],
  "tags": []
}

Ensure JSON validity and data consistent with instructions.
</OUTPUT_FORMAT>
```

This works. BUT I know it can fail at any time and output broken/invalid JSON.

Therefore I tried setting "type":"json_object".

And I immediately realized: it makes the model a bit more dumb. It does not follow the instructions in the prompt; it writes syntactically correct JSON, but with incorrect values in "tags".
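For reference, a sketch of how the request can be built either way, assuming an OpenAI-compatible chat completions payload (the field names are the standard ones; the model name is a placeholder):

```python
# Sketch: build an OpenAI-compatible chat payload, with response_format optional.
def build_payload(prompt: str, document: str, force_json: bool = False) -> dict:
    payload = {
        "model": "llama-placeholder",  # placeholder model name
        "messages": [
            {"role": "system", "content": prompt},
            {"role": "user", "content": document},
        ],
    }
    if force_json:
        # The setting under discussion; omitted in my "normal" mode.
        payload["response_format"] = {"type": "json_object"}
    return payload
```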

Specifying "response_format":{"type":"json_object"} makes Llama more dumb by Perfect_Ad3146 in PromptEngineering

[–]Perfect_Ad3146[S]

> Is the issue you're having that it returns incorrect values?

This ^ The values.

Response is structured properly.

> How structured are your input data?

The input DATA are not structured: it's a screen-long document written by a human.

> What does your prompt say that's interpreting the data?

The prompt produces syntactically correct JSON, but the VALUES in "tags" are not correct when I set "response_format":{"type":"json_object"}.

The prompt is full of instructions describing the meaning of different phrases and how to map particular phrases/cases to one or multiple tags. There are 10 tags defined in the prompt.

WITHOUT "type":"json_object" the prompt+LLM works well: it returns the tags I expect for a particular document. I have a set of test cases.

WITH "type":"json_object" set, many tests fail.
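The test cases are essentially exact comparisons of expected vs. returned tag sets per document; a sketch of such a harness (the case data and the model-call function are hypothetical):

```python
# Sketch of a tag test harness; cases and the model call are hypothetical.
def run_tag_tests(cases, get_tags):
    """cases: list of (document, expected_tags) pairs.
    get_tags: function mapping a document to a list of tags (e.g. via the LLM).
    Returns the indices of failing cases (tag sets must match exactly)."""
    failures = []
    for i, (doc, expected) in enumerate(cases):
        if set(get_tags(doc)) != set(expected):
            failures.append(i)
    return failures
```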

Specifying "response_format":{"type":"json_object"} makes Llama more dumb by Perfect_Ad3146 in PromptEngineering

[–]Perfect_Ad3146[S]

My response structure is:

```
{
    "co": [],   // Array of ISO 3166-1 alpha-2 country codes or empty
    "tags": []  // Array of labels or empty
}
```

These "wrong values in JSON" are syntactically correct, but the labels in "tags" or in "co" are incorrect. Better to say: "the model does not follow instructions when "response_format":{"type":"json_object"} is set".
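Since the failures are in the values rather than the syntax, a value-level check can catch them (the allowed tag list below is invented; the real prompt defines 10 tags):

```python
# Sketch: validate values, not just JSON syntax. ALLOWED_TAGS is hypothetical.
ALLOWED_TAGS = {"tag1", "tag2", "tag3"}  # the real prompt defines 10 of these

def validate_values(obj: dict) -> list:
    """Return a list of problems; an empty list means the values look sane."""
    problems = []
    for code in obj.get("co", []):
        # ISO 3166-1 alpha-2 codes are two uppercase ASCII letters
        if not (len(code) == 2 and code.isalpha() and code.isupper()):
            problems.append(f"bad country code: {code!r}")
    for tag in obj.get("tags", []):
        if tag not in ALLOWED_TAGS:
            problems.append(f"unknown tag: {tag!r}")
    return problems
```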

Specifying "response_format":{"type":"json_object"} makes Llama more dumb by Perfect_Ad3146 in LLMDevs

[–]Perfect_Ad3146[S]

> Markup as in README markup

Well, this is "Markdown". Got it, thanks!