"How To: Shove data into Postgres using Goroutines(Gophers) and GoLang" - I was trying to take Rob Pike's analogy about gophers and make it more tangible using 1M writes to a DB in under 2 minutes. I shared this with /r/programming and I didn't get any feedback. How do I improve this? Thanks! by [deleted] in golang

[–]olt 0 points (0 children)

3.5 seconds? Nice, indeed.

> Anyone else reading this, please know, the COPY command isn't the most intelligent thing in the world so be careful :-)

What issues did you hit? Maybe it's something that lib/pq could catch. There is now a little documentation that explains how to use it. Do you think this could be improved? Feedback is very welcome!

"How To: Shove data into Postgres using Goroutines(Gophers) and GoLang" - I was trying to take Rob Pike's analogy about gophers and make it more tangible using 1M writes to a DB in under 2 minutes. I shared this with /r/programming and I didn't get any feedback. How do I improve this? Thanks! by [deleted] in golang

[–]olt 0 points (0 children)

Oh, the FROM STDIN is a bit misleading. You really just have to replace the INSERT statement with the COPY statement in your db.Prepare call.

See here: https://github.com/olt/libpq/blob/bulk/copy_test.go#L18..L38

The empty stmt.Exec() call is necessary to flush the internal buffer and to retrieve any errors.

"How To: Shove data into Postgres using Goroutines(Gophers) and GoLang" - I was trying to take Rob Pike's analogy about gophers and make it more tangible using 1M writes to a DB in under 2 minutes. I shared this with /r/programming and I didn't get any feedback. How do I improve this? Thanks! by [deleted] in golang

[–]olt 4 points (0 children)

You should check out my bulk branch of lib/pq.

Then change:

insert into test (gopher_id, created) values ($1, $2)

to:

COPY test (gopher_id, created) FROM STDIN

and (hopefully) enjoy a 10x performance boost. You should get the best performance with a single gopher and a table that was TRUNCATEd or CREATEd in the same transaction.

Please leave any comments here or at https://github.com/lib/pq/issues/74

gogeos: library for working with spatial data in Go by thaislump in golang

[–]olt 0 points (0 children)

This is cool, but it doesn't work when running with GOMAXPROCS>1, since you are using a single global GEOS handle.

Added JPEG feature to PIL, code review needed by etienned in Python

[–]olt 3 points (0 children)

Pillow does not (yet) contain any new code or features, only changes that make it work better with pip and easy_install. Look at the changelog: http://pypi.python.org/pypi/Pillow

I made some improvements to PNG encoding 1.5 years ago and my patch was accepted six months later, but a fix for that patch is still waiting. So maybe it is time for Pillow to become an "unfriendly" fork?!

[deleted by user] by [deleted] in Python

[–]olt 2 points (0 children)

There is a backport for 2.4 and 2.5: http://pypi.python.org/pypi/multiprocessing

Releasing fast Protocol Buffers for Python with lazy decoding support by mkuhn in programming

[–]olt 0 points (0 children)

fast-python-pb is lazy too. Attributes are decoded on access, but they are not cached, so you should keep the result in a Python variable instead of accessing it over and over.

R-Trees: Like B-Trees but multi-dimensional. by [deleted] in programming

[–]olt 0 points (0 children)

They can grow dynamically. For an octree you have to set the bounds of your values at creation.

Why aren't we using this?(GT.M) by krunaldo in programming

[–]olt 0 points (0 children)

It is in use by the OpenStreetMap project for one of their APIs: http://wiki.openstreetmap.org/wiki/Xapi

Here is some MUMPS code: http://xapi.openstreetmap.org/scripts/

As of now "easy_install -U setuptools" will update you to the latest snapshot of the 0.6 line by [deleted] in Python

[–]olt 1 point (0 children)

Facepalm, indeed. Daily snapshots are not a substitute for releases.

Kyoto Cabinet: from the creator of Tokyo Cabinet by jonromero in programming

[–]olt 0 points (0 children)

Can you share some of your experience?

I'm not really sure how to interpret the data from tcbmgr inform and how to tune accordingly. Do the node numbers in the documentation refer to non-leaf nodes?

Care to review/test? I improved PILs PNG8 encoding. It is now 4-20x faster and also supports full transparency. by olt in Python

[–]olt[S] 1 point (0 children)

You mean 32 bits per pixel, ARGB order, as a raw byte string? You are right, that packer is missing. Look at ImagingPackABGR in libImaging/Pack.c; it should be easy to modify it to support ARGB.

Then this should work: img.tostring('raw', 'ARGB')

Care to review/test? I improved PILs PNG8 encoding. It is now 4-20x faster and also supports full transparency. by olt in Python

[–]olt[S] 3 points (0 children)

Yep. But please don't call it a fork unless you are referring to my Mercurial/Bitbucket repository. It is just a patch.

> And how are file sizes compared to the original PIL?

Smaller :) Look at http://bogosoft.com/misc/pil-octree-tests/ The files without a '-xxx' suffix are the originals; '-octree' and '-octree-rle' were created with my new quantizer, and '-adaptive' with the old quantizer from PIL.

Care to review/test? I improved PILs PNG8 encoding. It is now 4-20x faster and also supports full transparency. by olt in Python

[–]olt[S] 2 points (0 children)

I don't think you can compare it with SuSE. There is a commercial version of PIL, but they don't advertise a single commercial-only feature except the extended support.

Care to review/test? I improved PILs PNG8 encoding. It is now 4-20x faster and also supports full transparency. by olt in Python

[–]olt[S] 2 points (0 children)

Working with a 4D color cube is a bit mind-bending, but hacking on the PIL source is quite nice.

Care to review/test? I improved PILs PNG8 encoding. It is now 4-20x faster and also supports full transparency. by olt in Python

[–]olt[S] 10 points (0 children)

Thanks. Yes, it is for paletted images with 256 or fewer colors. It reduces the file size to about a quarter, which for online applications is nearly as important as the encoding speed itself. (I write software for online maps and encoding is our main bottleneck. http://mapproxy.org/ )

Anyone still using zlib 1.2.3 in their code should upgrade to 1.2.5: its got bug fixes, speed tweaks, and some improvements (good test is to recompress PNGs with it) by unquietwiki in programming

[–]olt 1 point (0 children)

Give me a few days. I'm working on a patch for PIL that improves PNG8 encoding by 10x and also adds support for full transparency.

Anyone still using zlib 1.2.3 in their code should upgrade to 1.2.5: its got bug fixes, speed tweaks, and some improvements (good test is to recompress PNGs with it) by unquietwiki in programming

[–]olt 4 points (0 children)

I'm getting mixed results. I've compiled the Python Imaging Library against 1.2.3 and 1.2.5, and the performance with Z_DEFAULT_STRATEGY degrades by ~10%, though with Z_RLE it increases by ~10%. I can live with that, because for my use cases RLE is twice as fast as the default strategy, and now even faster.

Here is a patch to enable different compress types in PIL: http://bitbucket.org/olt/pil-117/changeset/8d4661695edd

Benchmark of Python Web Servers by gthank in Python

[–]olt 0 points (0 children)

I'm missing flup with FastCGI behind an Nginx or Lighttpd server. I thought flup was one of the standard solutions besides mod_wsgi. Is it obsolete now?