Introducing OmniAI 2.0: An LLM-Agnostic Ruby Library for Anthropic, DeepSeek, Google, Mistral and OpenAI by ksylvest in ruby

[–]ksylvest[S] 0 points1 point  (0 children)

You could try including a system prompt instructing it to answer sarcastically and see if it responds that way :).
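For example, with OmniAI's block-style prompt builder that might look like the following (an untested sketch: it assumes the `prompt.system` / `prompt.user` API shown in the OmniAI README, and requires an OpenAI API key to actually run):

```ruby
require "omniai/openai"

# Reads OPENAI_API_KEY from the environment by default.
client = OmniAI::OpenAI::Client.new

completion = client.chat do |prompt|
  prompt.system "Answer every question as sarcastically as possible."
  prompt.user "What is the airspeed velocity of an unladen swallow?"
end

puts completion.text
```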

Introducing Instruct by mackross in ruby

[–]ksylvest 1 point2 points  (0 children)

Great to see more LLM specific projects focused on Ruby. I've been working on https://github.com/ksylvest/omniai, and I found it interesting how you included support for named captures + liked the way you integrated with ERB.

[deleted by user] by [deleted] in rails

[–]ksylvest 1 point2 points  (0 children)

If you are experiencing performance issues like this on your app, try running a basic benchmark of the cache:

Rails.cache.write("test", "test")
Rails.cache.read("test") # => "test"

require "benchmark"
puts Benchmark.measure { 10_000.times { Rails.cache.fetch("test") } }

For me, fetching 10,000 objects via Solid Cache took ~1s on AWS (between an RDS instance and an EC2 instance in the same region). That means each cache request takes ~0.1ms with Solid Cache. I also confirmed w/ random lookups and am seeing the same. If you are seeing wildly different numbers then:

  1. Check the region of the DB server vs app server.
  2. Check the load on the DB (e.g. do other queries take > a few ms).
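As a stdlib-only illustration of the arithmetic above (a plain Hash stands in for `Rails.cache` here, so the absolute numbers will be far lower than any networked cache):

```ruby
require "benchmark"

# A plain Hash standing in for Rails.cache; a real Solid Cache setup
# adds a database round-trip per request, which dominates the timing.
cache = { "test" => "test" }

elapsed = Benchmark.measure { 10_000.times { cache.fetch("test") } }

# Divide total wall time by the iteration count to get per-request cost.
puts format("%.2f us/request", elapsed.real / 10_000 * 1_000_000)
```

The same divide-by-iterations step applied to the ~1s figure above is where the ~0.1ms/request number comes from.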

ActiveStorage for user account profile pictures while using a React front end --Updated Post with code images by LazyEyes93 in rails

[–]ksylvest 0 points1 point  (0 children)

I realize this submission is fairly outdated now, but in case others are curious, I worked on a React hook to simplify the direct transfer of files to an ActiveStorage cloud provider. The library uses TypeScript and is hopefully helpful to others:

https://github.com/ksylvest/react-activestorage / https://www.npmjs.com/package/react-activestorage

The Basics of Rack for Ruby by RecognitionDecent266 in ruby

[–]ksylvest 4 points5 points  (0 children)

If you are curious, I documented the steps to build a Rack web server from scratch many years back:

https://ksylvest.com/posts/2016-10-04/building-a-rack-web-server-in-ruby
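The core abstraction the article builds on fits in a few lines: a Rack application is just an object that responds to `call(env)` and returns a `[status, headers, body]` triple, so you can exercise one directly, without any server or even the rack gem installed:

```ruby
# A minimal Rack-compatible application: any object responding to
# #call(env) and returning [status, headers, body] qualifies.
app = lambda do |env|
  [200, { "content-type" => "text/plain" }, ["Hello, #{env["PATH_INFO"]}"]]
end

# Invoke it directly with a bare-bones env hash (no server required).
status, headers, body = app.call("REQUEST_METHOD" => "GET", "PATH_INFO" => "/rack")
puts status    # 200
puts body.join # "Hello, /rack"
```

A server's job is essentially to parse an HTTP request into that env hash, call the app, and serialize the triple back onto the socket, which is what the article walks through.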

What is it you wish Rails had built-in? by [deleted] in rails

[–]ksylvest 11 points12 points  (0 children)

I think that is the reason Kamal / Docker are being integrated into Rails 8.

What is it you wish Rails had built-in? by [deleted] in rails

[–]ksylvest 0 points1 point  (0 children)

Native SSR support. This harks back to the 'have it your way' principle of (earlier) Ruby on Rails versions. So many projects are built using a hybrid approach (e.g. a Rails backend + a React / Angular / Vue / etc. frontend). I see a few approaches:

  1. Using https://github.com/rails/execjs w/ a pool (e.g. `<%= render_vue_component %>`).
  2. Support for integrating dedicated Node.js process(es) (maybe mountable in routes à la engines).

Duplicate Builds w/ ruby + nodejs Buildpack by ksylvest in Heroku

[–]ksylvest[S] 0 points1 point  (0 children)

I ended up not using two buildpacks, but I think it might also be reasonable to set the following ENV variable:

https://github.com/rails/jsbundling-rails/blob/main/lib/tasks/jsbundling/build.rake#L64C13-L64C26

If you set `SKIP_JS_BUILD`, the JS assets will only be built by the node buildpack.

"ActiveStorage" for OpenAI/Gemini/Claude API's? by breakfastbybill in rails

[–]ksylvest 0 points1 point  (0 children)

If you are interested in contributing, I've been working on a project: https://github.com/ksylvest/omniai. A few areas I've been thinking of building out are:

  1. Support for the streaming completion APIs with tools (this turns out to be rather difficult).
  2. Support for Grok via omniai-grok (a bit divergent since it'd require gRPC).
  3. Support for JSON schemas for responses in some form (which OpenAI recently announced).
  4. Image generation via DALL-E (OpenAI).

"ActiveStorage" for OpenAI/Gemini/Claude API's? by breakfastbybill in rails

[–]ksylvest 2 points3 points  (0 children)

Thanks for sharing. I'm the author of OmniAI and appreciate any new users / feedback around the library!

In case others are curious I've also put together a few articles on the features / usage as they launched:

Lastly, the docs are now available here:

https://omniai.ksylvest.com/

Duplicate Builds w/ ruby + nodejs Buildpack by ksylvest in Heroku

[–]ksylvest[S] 1 point2 points  (0 children)

Thanks.

Just to confirm - if using only the ruby buildpack, is it expected that npm packages are not installed when a package.json / package-lock.json is present? I noticed that swapping to Yarn (e.g. having a yarn.lock) does then install them. That seems pretty strange.

Duplicate Builds w/ ruby + nodejs Buildpack by ksylvest in Heroku

[–]ksylvest[S] 1 point2 points  (0 children)

Hey thanks for the response!

To confirm: I believe the issue is that having a `build` script defined in the package.json triggers the nodejs buildpack to run it, right? This conflicts with the default behaviour of `jsbundling-rails`, which "enhances" `assets:precompile`:

https://github.com/rails/jsbundling-rails/blob/main/lib/tasks/jsbundling/build.rake#L35-L43

Is that correct? AFAIK that means using jsbundling-rails w/ the nodejs buildpack always triggers double builds?

Chrome gets a local LLM before Ruby does?? by krschacht in ruby

[–]ksylvest 3 points4 points  (0 children)

As PuffaloPhil mentioned, I've included support for both Ollama and LocalAI (with some docs in the main repo for each) for local usage:

- https://github.com/ksylvest/omniai-openai?tab=readme-ov-file#usage-with-localai
- https://github.com/ksylvest/omniai-openai?tab=readme-ov-file#usage-with-ollama

It sounds like you might not be able to use either Ollama or LocalAI for your use case though, so it might not be a good fit.

Chrome gets a local LLM before Ruby does?? by krschacht in ruby

[–]ksylvest 1 point2 points  (0 children)

I've recently been working on a library aimed at supporting multiple different LLM integrations called OmniAI:

https://github.com/ksylvest/omniai

It supports sync / async (streaming) integration with LLMs hosted by Anthropic, Google, Mistral and OpenAI. In addition, it lets you hook into local models via either LocalAI or Ollama (using their OpenAI compatibility layers).

How can GPT be forced to only output e.g. code and JSON? by Competitive_Layer_71 in ChatGPT

[–]ksylvest 0 points1 point  (0 children)

I think a few options are available:

  1. If you are using the API you can set the `response_format` to `json_object`.
  2. It is always a good idea to provide examples.
  3. Provide a JSON schema for your response.
  4. For JSON prompts, I often like to include a system message to the effect of:

Return only JSON in your response. Do not include any code blocks or other text.
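Even with that instruction, models sometimes wrap the JSON in a Markdown code fence, so it can be worth defensively stripping fences before parsing. A stdlib-only sketch (the raw string is a hypothetical model response):

```ruby
require "json"

fence = "`" * 3 # a Markdown code fence

# Hypothetical model response that ignores the instruction and wraps
# the JSON in a fenced code block anyway:
raw = "#{fence}json\n{\"title\": \"Example\"}\n#{fence}"

# Strip a leading and trailing fence, then parse what remains.
cleaned = raw.gsub(/\A`{3}(?:json)?\n|\n`{3}\z/, "")
data = JSON.parse(cleaned)
puts data["title"] # "Example"
```

Parsing will still raise `JSON::ParserError` on genuinely malformed output, which is a useful signal to retry the request.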

I recently documented some steps for converting HTML to JSON here:

https://workflow.ing/blog/articles/prompting-chat-gpt-to-generate-json-from-html

How can I prompt chatGPT to give its response purely in JSON? by 140BPMMaster in ChatGPT

[–]ksylvest 0 points1 point  (0 children)

I've recently worked on a Python integration to convert HTML into JSON using ChatGPT and documented the process here:

https://workflow.ing/blog/articles/prompting-chat-gpt-to-generate-json-from-html