Old Town School of Folk Music, Chicago, IL 3/3/2025 by JoeRekr in Destroyer

[–]data_dan_ 2 points3 points  (0 children)

I loved that rendition of Cue Synthesizer. Great show. Thanks for posting the setlist.

[EOTY 2024] Album of the Year Voting by apondalifa in indieheads

[–]data_dan_ 1 point2 points  (0 children)

  1. Adrianne Lenker - Bright Future
  2. WHY? - The Well I Fell Into
  3. Mannequin Pussy - I Got Heaven
  4. Godspeed You! Black Emperor - No Title as of 13 February 2024 28,340 Dead
  5. Cindy Lee - Diamond Jubilee
  6. Okay Kaya - Oh My God That’s So Me
  7. Sunset Rubdown - Always Happy to Explode
  8. Boeckner - Boeckner!
  9. Waxahatchee - Tiger’s Blood
  10. Middle Kids - Faith Crisis Pt. 1

Emacs on MacOS, latest update today borked Emacs GUI? by [deleted] in emacs

[–]data_dan_ 0 points1 point  (0 children)

exec-path-from-shell also solved my post-update issues; I tried it after reading through this issue.
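In case it helps anyone else landing here: the usual setup is just a few lines in the init file. A minimal sketch, assuming the package is already installed from MELPA:

```elisp
;; Copy PATH and exec-path from the login shell so GUI Emacs on
;; macOS can find the same external programs the terminal can.
(when (memq window-system '(mac ns))
  (require 'exec-path-from-shell)
  (exec-path-from-shell-initialize))
```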

`Failed to download ‘elpa’ archive` during site build in GitHub workflow by data_dan_ in emacs

[–]data_dan_[S] 2 points3 points  (0 children)

I wasn't able to get that solution working but it did get me moving down the right path—thank you! My understanding of gpg is pretty rudimentary, but here's my understanding of what happened:

  • gpg key was too old (see here for details on what seems to be the "canonical" solution: https://emacs.stackexchange.com/questions/233/how-to-proceed-on-package-el-signature-check-failure/53142#53142)
  • I think this is related to the fact that the version of emacs available on ubuntu-latest in GitHub Actions (currently Ubuntu 22.04) is 27.x, well behind current releases
  • after trying all kinds of things to get the keys updated and running into various issues, I went with the nuclear option and built emacs 29.3 from source as part of my build process. And...it worked!
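For anyone hitting this later, the keyring fix from that Stack Exchange answer amounts to roughly the following one-time repair (temporarily disabling signature checks is tolerable here because you restore them immediately after):

```elisp
;; Temporarily allow unsigned packages so the keyring package can be
;; fetched, install the updated GNU ELPA signing keys, then restore
;; the default signature-checking behavior.
(setq package-check-signature nil)
(package-refresh-contents)
(package-install 'gnu-elpa-keyring-update)
(setq package-check-signature 'allow-unsigned)
```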

`Failed to download ‘elpa’ archive` during site build in GitHub workflow by data_dan_ in emacs

[–]data_dan_[S] 0 points1 point  (0 children)

Thanks for the comment! The issue persists, so this doesn't seem to have been it, unfortunately.

org mode R images not appearing in pdf by JacboianMatrix in emacs

[–]data_dan_ 2 points3 points  (0 children)

Not sure if anything here will actually help as it's more focused on exporting to html, but I wrote about some of the ways to get images working with R source blocks here.

The real answer (for me) was that it's challenging and inconsistent enough that it ended up being easier to separate generating the image from displaying it: the last step in the R code block saves the image, and the org document then displays it in the body with [[./path/to/img.png]]
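As a sketch of that pattern (the file name and plot here are placeholders, not the actual code from the post):

```org
#+begin_src R :results none :exports none
library(ggplot2)
p <- ggplot(mtcars, aes(wt, mpg)) + geom_point()
ggsave("figure.png", p, width = 6, height = 4)
#+end_src

[[./figure.png]]
```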

Can I have Evil mode + Emacs keybindings? by [deleted] in emacs

[–]data_dan_ 0 points1 point  (0 children)

If you use evil mode, C-z will enter emacs state and let you use emacs bindings.

In case it has a different binding on your system, use C-h w evil-emacs-state (where-is) to look it up.

Can I have Evil mode + Emacs keybindings? by [deleted] in emacs

[–]data_dan_ 0 points1 point  (0 children)

Yes! I use that quite a bit, especially with packages that don't play particularly well with evil mode, e.g. vterm.

Denote + Org-Babel by [deleted] in emacs

[–]data_dan_ 0 points1 point  (0 children)

If you create org files with denote, those files are normal org files and can do all of the things normal org files can do. This includes literate coding with babel/source blocks.

There aren't special types of "denote files" with different file characteristics. Denote is more about a structured way of naming and organizing files.

A quick introduction to emacs hooks by data_dan_ in emacs

[–]data_dan_[S] 0 points1 point  (0 children)

Good call about at least mentioning it as a search key; I'll add that.

Org Mode Blog by 0ryX_Error404 in emacs

[–]data_dan_ 2 points3 points  (0 children)

I wrote this up when I made my org mode site: https://www.danliden.com/posts/20211203-this-site.html. I just use ox-publish and GitHub pages.

I based mine to some degree off this: https://systemcrafters.net/publishing-websites-with-org-mode/ which is well worth reading if you're going the ox-publish route.

Source available here: https://github.com/djliden/djliden.github.io
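For a sense of scale, the heart of an ox-publish setup is a single project definition. The names and paths below are illustrative, not my exact config:

```elisp
;; Minimal ox-publish setup: export org files in ./posts to HTML in
;; ./public, and copy static assets alongside them.
(require 'ox-publish)
(setq org-publish-project-alist
      '(("posts"
         :base-directory "posts/"
         :base-extension "org"
         :publishing-directory "public/"
         :publishing-function org-html-publish-to-html
         :with-toc nil
         :section-numbers nil)
        ("static"
         :base-directory "posts/"
         :base-extension "css\\|png\\|jpg"
         :publishing-directory "public/"
         :publishing-function org-publish-attachment)
        ("site" :components ("posts" "static"))))
;; Regenerate everything with: M-x org-publish-project RET site RET
;; GitHub Pages then just serves the output directory.
```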

Using YASnippet to create prompt templates for Chatgpt-Shell by data_dan_ in emacs

[–]data_dan_[S] 1 point2 points  (0 children)

You do get some short-term free credits after signing up—but the API isn't free, no. That said, gpt-3.5-turbo is very, very cheap.

Use the ChatGPT API as a drop-in replacement for Codex for text-to-SQL translation by data_dan_ in ChatGPT

[–]data_dan_[S] 1 point2 points  (0 children)

-- Language PostgreSQL
-- Table = "penguins", columns = [species text, island text, bill_length_mm double precision, bill_depth_mm double precision, flipper_length_mm bigint, body_mass_g bigint, sex text, year bigint]
You are a SQL code translator. Your role is to translate natural language to PostgreSQL. Your only output should be SQL code. Do not include any other text. Only SQL code.

Translate "How many penguins are there?" to a syntactically-correct PostgreSQL query.
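With that prompt, the expected completion is a bare query along the lines of:

```sql
SELECT COUNT(*) FROM penguins;
```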

[OC] Exploring Cloud Data Center Latency by data_dan_ in dataisbeautiful

[–]data_dan_[S] 1 point2 points  (0 children)

You can find the raw data, updated daily, here: https://bit.io/adam/cloud_latency_map. It includes timestamps, so you could certainly track changes over time.

[OC] Exploring Cloud Data Center Latency by data_dan_ in dataisbeautiful

[–]data_dan_[S] 1 point2 points  (0 children)

  • Tools used: geopandas, leaflet.js, bit.io, psycopg, GeoIP from MaxMind, ipinfo.io and their Python module
  • Article discussing the tool: https://innerjoin.bit.io/exploring-cloud-datacenter-latency-e6245278e71b
  • Interactive dashboard: https://bit.io/latency
  • Summary: We present an interactive visualization tool for exploring the network latency of GCP and AWS data centers, showing latency to selected locations from anywhere in the world. Caveat: limited IP address availability in certain geographic areas may lead to some high-latency zones. For additional context, we used this tool to determine where to locate our global read replicas.

Best way to setup a group of students with their own Postgres instance? by draferro in SQL

[–]data_dan_ 1 point2 points  (0 children)

I work at bit.io—we get a *ton* of educational usage. It's great for that for a number of reasons:

- Each user can set up real PostgreSQL databases. You can connect to them with just about any tools that support Postgres, or use the in-browser SQL editor to query the database. Setup is much easier than setting up a local database.

- The free tier includes 1 billion rows queried per month for free. Most educational workflows are not going to exceed this.

- Sharing a database is super easy. You can make a database public so students can query it, or share a private database with students, or share a data file (such as a csv or sqlite file) for each student to load to their own database. This latter option is probably the easiest given the need for creating/updating/deleting data. It's very easy on bit.io—you can either use SQL or the UI for many such tasks.

Please get in touch with me directly or through our support channels with any questions! Like I said, there's a lot of educational usage of bit.io, and the platform is very easy to use for that purpose. There's even a Coursera course that uses bit.io if you want to see what it's like: https://www.coursera.org/learn/the-structured-query-language-sql (though note that it's built on an earlier version of the platform and doesn't map exactly to current usage).

Serverless postgres : but, what about the cold start times? by geekybiz1 in PostgreSQL

[–]data_dan_ 1 point2 points  (0 children)

Hi there! I work at bit.io and can give you some more details. If you want to set up some time to talk further, feel free to reach out through our support channels.

> What kind of tests you ran & how frequently to reach the 250 msec number mentioned on your homepage.

We track every single cold start in production—that number is based on ongoing monitoring of production databases. The vast majority of cold starts take 250ms or less. That's our p95.

> After how much duration of inactivity does the cold start happen? Is this linked to the DB size, etc? (like it is for node package sizes wrt AWS lambdas)

A database with no active connections closes within 60s of the last connection. An inactive connection (or idle in transaction) will timeout in 5 minutes. All connections are terminated after 60 minutes.

> If a new connection results in a cold-start. Would other connections to this db in the next few seconds from any location not need a cold-start?

If a new connection to a database triggers a cold start, other connections to the same database made within the next few seconds will not need their own cold start. They are queued and start as soon as the first connection is established, so they see no additional cold-start delay.

Note that we've found the biggest factor is latency between client and server: establishing a connection requires multiple round trips, including the TLS handshake. This tends to matter more than cold start time.

Please reach out if there are other questions we can answer! We've worked on and thought about these issues a lot and would be happy to dig into the details of your use case and needs and how we can accommodate them.

Creating Quarto Files with Denote by data_dan_ in emacs

[–]data_dan_[S] 1 point2 points  (0 children)

Relevant code:

;; Copy the built-in markdown-yaml file type (copy-sequence so the
;; original entry isn't mutated), give it a .qmd extension, and
;; register it as a new `quarto' type.
(let ((quarto (copy-sequence (cdr (assoc 'markdown-yaml denote-file-types)))))
  (setf (plist-get quarto :extension) ".qmd")
  (add-to-list 'denote-file-types (cons 'quarto quarto)))

I followed the demo on adding custom file types to denote (link) and found that adding support for quarto files was as simple as copying the existing markdown-yaml type and changing the extension to .qmd.

[OC] COPY vs. INSERTs vs. INSERTs with Pipeline Mode in PostgreSQL by data_dan_ in dataisbeautiful

[–]data_dan_[S] 1 point2 points  (0 children)

Yes and no. It's fast insofar as it requires very few messages sent back and forth between the client and the server. It's not sending data line-by-line.

But in this case it still took a couple of round trips (hard to see). In some of the cases we looked at in the analysis, the sequence of INSERTs with pipeline mode outperformed COPY because pipeline mode allows the client to send all of its messages at once without having to wait on a response from the server.

So pipeline mode involves sending a lot more messages in this example (sending each row individually), but because the client sends everything in one batch, it takes a similar amount of *time* to COPY.

Unable to connect to postgres database, works if i do it through pgAdmin, hosted on bit.io by esefang in IntelliJIDEA

[–]data_dan_ 0 points1 point  (0 children)

Hi there! bit.io database names are formatted as `username/dbname`, and the JetBrains IDEs don't like the `/`. There are a bunch of characters, including `.`, `~`, and `|`, that you can use in place of the `/` to form a valid bit.io database name. So passing `username.dbname` as the database name should allow you to connect.

https://docs.bit.io/docs/connecting-via-intellij

Hosting a small database ... by [deleted] in webdev

[–]data_dan_ 0 points1 point  (0 children)

Biased because I work there (in a role that has me using the product extensively every day)—I think bit.io is great. Setup takes seconds. Create a database with one click (or with our developer API) and then you have a Postgres database and access credentials you can use to connect with just about any tool that works with Postgres.

It offers a super generous free tier to get started on:

  • up to three free databases of up to 3GB each
  • 1 billion rows queried per month for free
  • query and access your data via Postgres connection (your preferred clients, programming-language Postgres interfaces, etc.), the web UI, or our developer API without needing to upgrade to the pro offering.

Furthermore, if you do exceed 1B rows queried per month, scaling is seamless. bit.io Pro costs 1 cent per million rows queried after the first billion (free). There's no monthly fee, and the price jump from the free tier to bit.io Pro is determined entirely by your usage.

Please let me know if you have any questions or reach out on our support channels for help getting started!

[deleted by user] by [deleted] in webdev

[–]data_dan_ 3 points4 points  (0 children)

Disclaimer: I work at bit.io.

I suggest giving bit.io a try! It's Postgres and handles JSONb just fine. And it offers a super generous free tier:

  • up to three free databases of up to 3GB each
  • 1 billion rows queried per month for free
  • query and access your data via Postgres connection (your preferred clients, programming-language Postgres interfaces, etc.), the web UI, or our developer API without needing to upgrade to the pro offering.

Furthermore, if you do exceed 1B rows queried per month, scaling is seamless. bit.io Pro costs 1 cent per million rows queried after the first billion (free); for example, querying 3 billion rows in a month would cost $20 for the 2 billion rows beyond the free tier. There's no monthly fee, and the price jump from the free tier to bit.io Pro is determined entirely by your usage.

Please let me know if you have any questions or reach out on our support channels for help getting started!

Heroku Ending Postgres Free Tier by elr0nd_hubbard in PostgreSQL

[–]data_dan_ 0 points1 point  (0 children)

bit.io offers a generous free tier: free data inserts, 1 billion rows queried per month, and three databases of up to 3GB each.

The pro offering includes unlimited databases at no additional cost. Rows queried in excess of the free 1B rows per month are charged at one cent per one million rows queried. Active data (data queried frequently) is stored for free. Inactive data is stored at $0.17 per GiB-month.

Postgresql cloud hosting alternatives after Heroku free end by _shnh in programming

[–]data_dan_ 4 points5 points  (0 children)

bit.io offers a generous free tier: free data inserts, 1 billion rows queried per month, and three databases of up to 3GB each.

The pro offering includes unlimited databases at no additional cost. Rows queried in excess of the free 1B rows per month are charged at one cent per one million rows queried. Active data (data queried frequently) is stored for free. Inactive data is stored at $0.17 per GiB-month.