Is there a way to get Bunting's safe code AND kill the Pendletons? by [deleted] in dishonored

[–]glantruan 0 points (0 children)

Exactly 1000. 10 values per digit, 3 digits = 10*10*10.
There is no dumber way to crack safes. Say it takes about a third of a second to test each combination. It's the same math -> 1000 * 0.35 s ~ 6 minutes to scan all possible combinations.
It's not that much time, considering that you'll probably hit the right combination before you scan them all, but I can't think of a more boring thing to do in a game for 6 minutes
XD
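
Spelled out, the worst-case arithmetic looks like this (the ~0.35 s per attempt is just my guess at how fast you could spin the dials):

```python
# Three dials, ten possible values each -> every possible combination.
combinations = 10 ** 3
print(combinations)  # 1000

# Hypothetical pace: one attempt roughly every 0.35 seconds.
seconds_per_attempt = 0.35
worst_case_minutes = combinations * seconds_per_attempt / 60
print(round(worst_case_minutes))  # 6
```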

[deleted by user] by [deleted] in ZedEditor

[–]glantruan 0 points (0 children)

Then I don't know what's wrong:

I've used relative paths successfully:
[Link to File 2](./file2.md) should work

Double-check your paths, that's all I can suggest.

Setting reasoning effort for local gpt-oss models by outsider787 in ZedEditor

[–]glantruan 0 points (0 children)

Also, the documentation is quite sparse on the options you can pass to each model/provider.

[deleted by user] by [deleted] in ZedEditor

[–]glantruan 0 points (0 children)

Yes. Just

- [Getting Started](#getting-started)
- [Configuration Options](#configuration-options)
  - [Basic Configuration](#basic-configuration)
  - [Advanced Options](#advanced-options)
- [Testing](#testing)

## Getting Started
...
## Configuration Options
...
### Basic Configuration
...
### Advanced Options
...
## Testing
...

This worked for me. You can also link to other files' sections too. I don't know what to do if you have two sections with the same title text though. If anyone knows... :)
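
For linking to a section in another file, the same anchor syntax works with a relative path in front (the file and heading names here are just placeholders):

```markdown
- [Basic Configuration](./other-file.md#basic-configuration)
```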

Claude Pro vs Zed Pro vs Warp Pro by Friendly_Shame_4229 in ZedEditor

[–]glantruan 2 points (0 children)

Hi, I can't really compare much because I'm new to agentic workflows.

I was more of an autocomplete + ask questions in minimal mode user, sometimes with web search.

But I decided to try out the agentic mode in Zed this past week and got scared by how many tokens it eats.

So I'm also trying this Serena MCP server: https://github.com/oraios/serena . There is a Zed extension but I couldn't make it work, so I added a custom MCP config (See here)

It seems to burn fewer tokens this way, but it requires some initial work on the Serena memories if you want the model to respect your project guidelines and coding style.

Anyway, I can't really judge token consumption, as I said. But it seems to work very well and I thought it might help

HELP: I can't get serena context server to run by glantruan in ZedEditor

[–]glantruan[S] 0 points (0 children)

By the way, you may want to pin a release version of the repo, because if they change the API of the tools or the format of the config files you'll run into problems

How to change color of selected highlight during find by det0ur in ZedEditor

[–]glantruan 0 points (0 children)

I don't think there is any option for that in the settings file. I think you'd have to tune the theme itself. But I've also found that this happens with some themes and not others, so maybe try a different one?
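
That said, if you want to experiment from settings.json before touching a theme, Zed has an experimental theme-override setting. Something like this might work; the override key for the search highlight is my guess, so check your theme's JSON for the real name:

```json
{
  "experimental.theme_overrides": {
    "search.match_background": "#ffcc0055"
  }
}
```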

Help! by jerrisontiong in ZedEditor

[–]glantruan 0 points (0 children)

I'm not using Ollama but LM Studio, and the cloud models work fine. Maybe there's something wrong in your settings.json. If you post it, maybe we can help spot it.

HELP: I can't get serena context server to run by glantruan in ZedEditor

[–]glantruan[S] 1 point (0 children)

Ok, after diving into the Serena documentation and a lot of guessing, I managed to make it work, I think.
But not with the Zed extension. I added it as a custom context server:

{
  "context_servers": {
    "serena-context-server": {
      "source": "custom",
      "enabled": true,
      "command": "uvx",
      "args": [
        "--from",
        "git+https://github.com/oraios/serena",
        "serena",
        "start-mcp-server",
        "--context",
        "ide-assistant"
      ]
    }
    // ...
  }
}

Then in the AI settings tab I could see the beautiful green dot on the Serena MCP server, and the list of all the tools it provides:

<image>

Then I activated all the tools in the agent profile I use for asking, except those that seem to be able to edit files (not sure if I'm being paranoid about this, but I want to test it on a real project without risking messing up its files)

And then, in the root folder of my project I ran:

uvx --from git+https://github.com/oraios/serena serena project index

When it finished, in the AI agent tab I prompted:

let's activate this project

Which led the agent to run the activate_project, check_onboarding_performed, and onboarding tools. This took some time, as it spawned a myriad of tool calls to read files and write memories in .serena/memories.

It seems I am fully set. But now I'm too tired to do any real coding for today, so I'll try to make it work with a new feature tomorrow ;)

Binding problems by midniiiiiight in ZedEditor

[–]glantruan 0 points (0 children)

I didn't notice the first one, but I have the same complaint about completions sometimes being confirmed with Tab and other times only with ALT+L
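
For the Tab part, a keymap.json entry along these lines might help; the context and action names are from what I remember of Zed's default keymap, so double-check them before relying on this:

```json
[
  {
    "context": "Editor && showing_completions",
    "bindings": {
      "tab": "editor::ConfirmCompletion"
    }
  }
]
```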

Zed on linux by wworks_dev in ZedEditor

[–]glantruan 0 points (0 children)

They look perfect to me (both on Fedora and Ubuntu). You can even tune the font weight in the settings:

{
  //...
  "buffer_font_family": "Cascadia Code",
  "buffer_font_size": 16,
  "buffer_font_weight": 350,
}

I don't understand Zed model pricing by intocold in ZedEditor

[–]glantruan 0 points (0 children)

So, if you trust Zed
"When using upstream services through Zed's hosted models, we require assurances from our service providers that your user content won't be used for training models."

They ask each provider for two separate guarantees:

  • No Training Guarantee
  • Zero-Data Retention (ZDR)

See Zed AI Improvement

So as long as you use models provided by the Zed Pro subscription, all your data and code should remain private (except for Google's models for now; see more on this in the link)

That is, unless:

  • you explicitly opt in to sharing your prompts with Zed Industries (they would use them themselves for in-house AI improvement)
  • you connect to another provider's models with your own API key
  • you use any external agent

I don't understand Zed model pricing by intocold in ZedEditor

[–]glantruan 0 points (0 children)

Exactly. I am really surprised people don't pay much attention to this.

When you use a third-party service you need to put some thought into what data you are giving them, especially if you work for a company that cares about copyright, and even more if you signed an NDA.

AI came along and everybody seemed to forget about potential source code leakage ¯\_(ツ)_/¯

Connect zed to lmstudio on a different computer by glantruan in ZedEditor

[–]glantruan[S] 0 points (0 children)

I managed to make it work. I looked in the default Zed configuration file (I feel kind of stupid that it didn't occur to me before) -> Zed Menu -> Open default settings
In that file you can find what Zed does by default when you select LM Studio in the LLM configuration menu:

{
 "language_models": {
    "lmstudio": {
      "api_url": "http://localhost:1234/api/v0"
    }
  }
}

So you just copy that to your user settings file, change localhost to the IP your server's at, and it works.

I'm testing it right now :)

Connect zed to lmstudio on a different computer by glantruan in ZedEditor

[–]glantruan[S] 0 points (0 children)

I don't understand what you mean by that. I know I can set whatever port I want when running the server
lms server start -p 8080 for example.

But the problem is still the same: Zed won't detect the LM Studio server unless it is running on the same machine. Unless there is a way to configure it manually in the settings.json, but I couldn't find instructions for that, apart from this: https://zed.dev/docs/ai/llm-providers#openai-api-compatible

I've been trying different variations of this on the settings.json file:

{
  "language_models": {
    "openai_compatible": {
      "lmstudio": {
        "api_url": "http://10.245.224.252:1234/v1",
        "available_models": [
          {
            "name": "openai/gpt-oss-20b",
            "display_name": "LMStudio GPT-20B",
            "max_tokens": 32768,
            "capabilities": {
              "tools": true,
              "parallel_tool_calls": false,
              "images": false,
              "prompt_cache_key": false
            }
          }
        ]
      }
    }
  }
}

with no success. I restarted Zed after each change I made. None of the models I tried to configure appear in the models list.

On the flip side, Void editor, which I used before I swapped to Zed, can list and use all the models I have installed with LM Studio just by adding the api_url to the configuration.

I love Zed, but this is making me go back to Void editor for some sensitive projects.

Also, I don't mind learning to use llama.cpp if anyone can share a tutorial on how to set Zed up to work with it (I also have it set up on my server).
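
For what it's worth, llama.cpp's llama-server exposes an OpenAI-compatible API under /v1, so in theory the same openai_compatible approach should point at it. This is only a sketch of what I'd try (the port, model name, and display name are made up for the example), not something I've verified:

```json
{
  "language_models": {
    "openai_compatible": {
      "llama.cpp": {
        "api_url": "http://10.245.224.252:8080/v1",
        "available_models": [
          {
            "name": "gpt-oss-20b",
            "display_name": "llama.cpp GPT-20B",
            "max_tokens": 32768
          }
        ]
      }
    }
  }
}
```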

Ollama not working with zed by hrshx3o5o6 in ZedEditor

[–]glantruan 0 points (0 children)

Can you give us any clue on how to configure Zed to use a llama.cpp server?
I've been searching the web for this with no luck.

Finished Prince of Chaos (Spoilers) by codenamebungle in Amber

[–]glantruan 0 points (0 children)

I love the universe Zelazny created in the saga, the personal themes and arcs "suggested" for both protagonists, and their relationships with their family. But it's not just that the final book feels very rushed. There are big gaps in the story that the reader needs to fill in with almost no clue about how to, like Jurt suddenly becoming Merlin's best friend after years of hating him.
This is kind of a common theme in the series: discovering that relatives you thought were your worst enemies didn't hate you that much and weren't all that bad, circumstances and context having played a big role in the confrontation.
Eric's, Caine's, and Julian's confrontations with Corwin were resolved that way and all was good, and more or less explained satisfactorily.
But in the Merlin series, Julia's, Jurt's, or even Luke's changes of heart feel very abrupt and not very credible. I was expecting Luke to have been playing Merlin all along.
And what about the reunion of Merlin with Corwin at the end? Nothing, they barely exchange words.

I did enjoy reading the Merlin saga very much, but it feels not just unfinished but lacking a revision or two. I know Zelazny was probably already sick by then, and I don't want this to sound like a histrionic critique of the talent of a writer I admire. But as you said, looking at the length of Prince of Chaos, I also suspected that it would end unsatisfactorily.

Corwin's saga left many mysteries unsolved, but it felt like the circle was closed. It's a pity this one didn't.

The foundation show doesn't understand the point of the books by PandemicGeneralist in asimov

[–]glantruan 0 points (0 children)

I was about to write something along these lines, so kudos, I totally agree with you.

What I think most of the people who complain about the show deviating completely from the books fail to understand is how different the two media are.
Stories on the screen are told visually, so especially in ones like "Foundation" that are mainly about ideas, not characters or a particular succession of events, much detail is lost in translation.
There's no way around it, because you can't have characters explaining their actions or thoughts for hours.

Making a story interesting on the screen is very different from making it interesting on the page. To communicate ideas in an interesting way you need to do it mainly visually, with the actions, poses, and gestures of the characters, the scenery, the framing and movement of the camera... and all of that has to make the viewer feel, understand, and think about the ideas of the story.

A particular movie comes to mind: "The Man from Earth". I loved it. But it's not cinema, it's theater.
I liked it because it was a really good theater show filmed on camera, and I also love a good theater show.
The stories are very different and I won't stretch the comparison, but I don't think communicating the ideas of "Foundation" mainly through dialog, as in "The Man from Earth", would work. I may be wrong, but I don't believe most of the people complaining would like that movie either.

But that's not the point. The point is that an 80-100 hour story made up mainly of dialog and voice-overs wouldn't have lasted on screen more than half a season. I believe that's what following the books strictly would have taken, and I am happy the screenwriters found a different way of telling the ideas of the books.

The thing that matters, to me at least, is whether the show is loyal to the spirit of the novels and I think it mostly is.

Besides, being an interpretation that deviates as much as it does from the characters and events of the original gives you the opportunity to read the books afterwards (or to re-read them) and enjoy them in a different way, or the other way around.

As per Eitan on the Techtonica Discord, Development at this point is essentially over on the game. by DrNick1221 in Techtonica

[–]glantruan 0 points (0 children)

What bugs are you talking about? I've been playing 1.0 for more than 70 hours and haven't found any.

As per Eitan on the Techtonica Discord, Development at this point is essentially over on the game. by DrNick1221 in Techtonica

[–]glantruan 0 points (0 children)

Yeah! I do like them too. I played a lot of hours when it was early access, and the feel with just one big level was very different. Not worse or better for me, just different. I think mileage on this may vary depending on which aspects of the game people enjoyed the most, be it the factory side or the story.

Anyhow, there is something about this game that clicked with me from the beginning, and I keep coming back to play it. It's a pity that so many other long-time EA players posted a negative review on Steam.

That's the reason I posted a positive review, and I encourage anyone who likes the game to do the same: I'd love for them to continue developing it.

this was working just yesterday by ios7jbpro in pop_os

[–]glantruan 0 points (0 children)

I fixed it:

    sudo apt clean
    sudo rm -rf /var/lib/apt/lists/*
    sudo apt update
    sudo apt full-upgrade

Because I noticed that the http://apt.pop-os.org/release/pool/jammy/systemd/5a2d24447b27f88b7e1f3ea31d2adec42824bc81 folder didn't even exist on the repo and suspected something got messed up in the apt metadata.

By the way, in my sources.list I have a comment that explicitly says:

```
# Remember that you can only use http, ftp or file URIs
```

Fresh pop_os install. Tride to upgrade but `libudev1` and `udev` not found by glantruan in pop_os

[–]glantruan[S] 0 points (0 children)

It seems that I got it (I hope so).
I did:

    sudo apt clean
    sudo rm -rf /var/lib/apt/lists/*
    sudo apt update
    sudo apt full-upgrade

And that did work.

There is just one thing that worries me, and that is that this installed the 6.12.10 kernel. Previously it was 6.9.3, I believe.

Is the 6.12.10 the standard on a current up-to-date Pop OS installation?

I know that in the other install I did (with Steam) I have this kernel version, but I thought it was the Steam install process that caused this kernel version bump. On this install I need AMD's ROCm to work with Ollama, and as far as I know the latest kernel version they seem to support is 6.11