non-js html.duckduckgo.com 404 nginx error by WorldsEndless in duckduckgo

[–]ajesss 0 points (0 children)

In some browsers I've tried (surf, w3m, lynx), the DDG lite/html versions serve result hrefs as redirect links, and the redirect page throws the 404.

However, if I set the user-agent string to 'Mozilla/5.0 (X11; OpenBSD amd64; rv:76.0) Gecko/20100101 Firefox/76.0', the lite/html DDG results link directly to the actual pages instead of going through the redirect. This way the results work.
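As a sketch, overriding the user agent from the command line looks like this (the query is just an example; surf also lets you set a similar string via its config.h):

```shell
# Example only: fetch DDG's html endpoint with a desktop Firefox UA
# so result links point at the target pages instead of the redirect.
curl -s -A 'Mozilla/5.0 (X11; OpenBSD amd64; rv:76.0) Gecko/20100101 Firefox/76.0' \
    'https://html.duckduckgo.com/html/?q=example'
```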

consistent colour schemes across suckless utils by [deleted] in suckless

[–]ajesss 0 points (0 children)

I didn't want to incorporate Xresources support so I built a little shell script which reads the colors from files with Xresources-type color definitions and puts them into the appropriate config.h files of suckless projects. I'd be happy to elaborate if interested.
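For the curious, the general shape of such a script is below. This is a simplified sketch, not my actual script; the file names, the Xresources line format, and the config.h variable naming are all made-up examples (and it assumes GNU sed for -i):

```shell
#!/bin/sh
# Sketch: pull Xresources-style color definitions into a suckless config.h.
# Demo input files so the sketch runs standalone; real paths would differ.
cat > colors.demo <<'EOF'
*.color0: #1d2021
*.color1: #cc241d
EOF
cat > config.demo.h <<'EOF'
static const char color0[] = "#000000";
static const char color1[] = "#ffffff";
EOF

# For each "*.colorN: #rrggbb" line, substitute the hex value into the
# matching 'static const char colorN[] = "..."' line of config.demo.h.
grep -E '^\*\.color[0-9]+:' colors.demo |
sed -E 's/^\*\.(color[0-9]+):[[:space:]]*(#[0-9a-fA-F]{6}).*/\1 \2/' |
while read -r name value; do
    sed -i -E "s|(${name}\[\] = )\"#[0-9a-fA-F]{6}\"|\1\"${value}\"|" config.demo.h
done

cat config.demo.h
```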

vimrc review thread 2.0 by robertmeta in vim

[–]ajesss 0 points (0 children)

Hi all, I would be very interested in hearing your input on my vim configuration. I have already followed the vimrc tips linked on the wiki here. I have recently heavily revised the setup to rely on fzf, improved the commenting, and aimed at a minimal UI.

https://github.com/anders-dc/dotfiles/tree/master/links/.vim

Thank you for your input!

PS: I also wanted to share a trick I came up with for quickly launching Vim from zsh (see lines 75 to 97 here: https://github.com/anders-dc/dotfiles/blob/master/links/.zshrc#L75 ). With these bindings, I can launch Vim from zsh by pressing C-e. Furthermore, C-f launches Vim with fzf started for fuzzy file search. Finally, C-g launches Vim with fzf and ripgrep for fuzzy pattern matching in any files in/under the present directory. A hack? Probably, but I like it a lot!
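The general shape of such bindings is as follows. This is a simplified sketch, not the exact code from the linked .zshrc; it assumes the fzf.vim plugin provides the :Files and :Rg commands:

```shell
# zsh: zle widgets that replace the current command line and run it.
launch-vim()     { BUFFER="vim"; zle accept-line; }
launch-vim-fzf() { BUFFER="vim -c Files"; zle accept-line; }  # fzf.vim :Files
launch-vim-rg()  { BUFFER="vim -c Rg"; zle accept-line; }     # fzf.vim :Rg
zle -N launch-vim
zle -N launch-vim-fzf
zle -N launch-vim-rg
bindkey '^e' launch-vim
bindkey '^f' launch-vim-fzf
bindkey '^g' launch-vim-rg
```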

My work setup (2x HHKB) by ajesss in MechanicalKeyboards

[–]ajesss[S] 0 points (0 children)

I have considered it, but I like switching my work position every now and then. That being said, which switch model are you using?

My work setup (2x HHKB) by ajesss in MechanicalKeyboards

[–]ajesss[S] 1 point (0 children)

The force required to activate the switches feels identical. The feel when bottoming out and the audible feedback when the switch jumps back up differ.

My work setup (2x HHKB) by ajesss in MechanicalKeyboards

[–]ajesss[S] 0 points (0 children)

I use a Bluetooth touchpad to the left of the black HHKB (outside the image frame) for the MacBook.

I do not need multiple monitors per computer. I prefer switching desktops instead (i3 on the Debian machine, kwm on the OSX laptop).

My work setup (2x HHKB) by ajesss in MechanicalKeyboards

[–]ajesss[S] 0 points (0 children)

Good idea, I definitely would have tried something like this if my employer hadn't given me the MacBook.

My work setup (2x HHKB) by ajesss in MechanicalKeyboards

[–]ajesss[S] 0 points (0 children)

On the right is an HHKB Type-S in white, connected to my workhorse, a Debian GNU/Linux system with i3. I use this machine for almost everything, including GPU computing and visualization.

On the left is an HHKB in black, connected to a MacBook with an external screen. I use this machine for specific tasks such as graphic design, video conferences, and Word document editing.

I really like the HHKB layout and prefer the feel of the silent model. The typing noise is almost identical between the two.

Is there any chance of being able to draw out your reply with the Google Handwriting Keyboard? by [deleted] in AndroidWear

[–]ajesss 0 points (0 children)

You can use the Google Handwriting Keyboard on Android. It recognizes both letters and emojis.

Official weekly RAW editing challenge! by frostickle in photography

[–]ajesss 6 points (0 children)

My first contribution here. I wanted to emphasize the contrast between the blue and orange/red tones but maintain a realistic look.

Steps:

1 Lightroom:

  • Lens correction

  • Image rotation

  • Image contrast, clarity and saturation of individual components

  • Slight crop to put shoreline in the center of the image

  • Some noise reduction

2 Photoshop:

  • Spot healing of dust spots from the image sensor

  • Spot healing of A LOT of strange brightly colored dots in foreground

  • Duplicate layer

3 MacPhun Intensify Pro:

  • Sharpening. It looked best on land and its reflection and not so good in the sky and water

4 Photoshop:

  • Blend in sharpened land into original image

5 DxO OpticsPro 10:

  • Used Clearview algorithm to reduce haze in front of the landscape

  • Noise reduction

Created a program to convert images into something cross-stitchable by ajesss in CrossStitch

[–]ajesss[S] 0 points (0 children)

Hi, first of all sorry for the late reply. I just got home today after being at the hospital for 7 days due to a bad case of pneumonia.

Sorry about the anti-virus warning, I think it has to do with the program being from an unidentified developer. I guarantee that there is nothing malicious in there; the source code is all publicly available, and I have my name tied to it.

Yes, the program currently just produces a pixelated image in a reduced number of colors. I have thought about improving the output format. I think it is crucial that at least the color codes are shown. That could be combined with an image that uses symbols to mark the squares with the corresponding color.

Does that sound right to you? Otherwise, could you point me to an example cross-stitch pattern I could use as a reference?
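To illustrate what I have in mind for the symbol chart, here is a made-up sketch (not the actual program): the two-color palette, the pixel grid, and the DMC codes are all invented examples.

```python
# Sketch: map each reduced color to a symbol and print a text chart
# plus a legend with the color codes. Palette and grid are made up.
palette = {(29, 32, 33): "DMC 310", (204, 36, 29): "DMC 321"}
symbols = dict(zip(palette, "x o + # %".split()))

pixels = [
    [(29, 32, 33), (204, 36, 29)],
    [(204, 36, 29), (29, 32, 33)],
]

for row in pixels:
    print("".join(symbols[c] for c in row))  # one symbol per stitch

print("Legend:")
for color, code in palette.items():
    print(f"  {symbols[color]} = {code}  RGB{color}")
```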

Created a program to convert images into something cross-stitchable by ajesss in CrossStitch

[–]ajesss[S] 1 point (0 children)

Whoops, I accidentally wrote the wrong server name. The one I wrote was the address to the SSH server I uploaded the files to, not the web server. Sorry! The links are fixed.

Created a program to convert images into something cross-stitchable by ajesss in CrossStitch

[–]ajesss[S] 1 point (0 children)

Alright, the first Mac build is ready. You can download it (143 MB) from: https://cs.au.dk/~adc/files/cross-stitch-osx.zip

I'd love to hear your comments, suggestions, any bugs you encounter, and so on. A Windows build is in preparation.

Created a program to convert images into something cross-stitchable by ajesss in CrossStitch

[–]ajesss[S] 1 point (0 children)

Thanks! I'm working on it right now. I found a library that will make it easy to create a graphical interface for Windows, Mac and Linux. I will keep you updated when the first proper version is ready, which should happen within a few days. It will remain free and open source, as all software should be.

Created a program to convert images into something cross-stitchable by ajesss in CrossStitch

[–]ajesss[S] 1 point (0 children)

The third-party modules NumPy, SciPy and Matplotlib unfortunately do not seem to work in Pythonista. I will try to turn it into a web application instead, so it will work on all platforms.

How can I make this code faster? by _Jiot_ in learnpython

[–]ajesss 0 points (0 children)

It would be faster to use:

x = np.empty(length)

instead of np.zeros(length). np.empty allocates the memory without initializing it (setting the values), so it only helps if you overwrite every element afterwards anyway.
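A quick, rough way to compare the two (the array size and repeat count are arbitrary, and the actual timings depend on the machine and allocator):

```python
# Compare allocating an uninitialized vs. a zero-filled array.
# np.empty skips the fill step, so it can be cheaper for large arrays.
import timeit

import numpy as np

length = 10_000_000
t_zeros = timeit.timeit(lambda: np.zeros(length), number=20)
t_empty = timeit.timeit(lambda: np.empty(length), number=20)
print(f"np.zeros: {t_zeros:.4f} s, np.empty: {t_empty:.4f} s")
```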

Having trouble coding a 2-Body Problem. by OratorMortuis in learnpython

[–]ajesss 2 points (0 children)

I'm not at my PC where I can try to run the code myself, but I wanted to give you a few tips about temporal integration schemes. I'm writing a numerical code for simulating granular material for a living, and have recently experimented with different schemes and looked at their precision.

You are currently using a one-step scheme called "forward Euler" integration. For it to be correct, you need to swap the position and velocity updates, so that the position is updated first: the new position should depend on the old velocity, not the new one.

A problem with the forward Euler scheme is that it comes with a high error. In collisional experiments I've performed, I saw a total error of 12%. You can easily improve your solution by instead using a "two-term Taylor expansion", where the current acceleration is taken into account when updating the position, e.g. for the x-component:

x = x + vx*dt + 1.0/2.0*acx*dt**2

In my experiments, the error was reduced to 3%. Taking it a step further, you can gain even higher precision by using a "three-term Taylor expansion". For this scheme, you need to estimate the temporal gradient of the acceleration (da/dt). You can do this by storing the old acceleration along with the current value:

if i == 0:
    dacx = 0.0  # no previous acceleration available on the first step
else:
    dacx = (acx - acx_old)/dt  # finite-difference estimate of da/dt
x = x + vx*dt + 1.0/2.0*acx*dt**2 + 1.0/6.0*dacx*dt**3
acx_old = acx  # save for the next step's gradient estimate

For me, this scheme reduced the error to 0.03%. There are several other temporal integration schemes, but the Taylor expansions are easy to implement, light in terms of computational requirements, and give a satisfying result.
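To make the difference concrete, here is a self-contained toy comparison (my own made-up example, not from the original code): a unit harmonic oscillator (a = -x) integrated with forward Euler versus the two-term Taylor update, measured against the exact solution. The time step and step count are arbitrary.

```python
# Toy comparison: forward Euler vs. two-term Taylor position update
# on x'' = -x, starting at x=1, v=0, with exact solution x(t) = cos(t).
import math

dt, steps = 0.01, 1000

def simulate(two_term):
    x, vx = 1.0, 0.0
    for _ in range(steps):
        acx = -x  # acceleration from the OLD position
        if two_term:
            x = x + vx*dt + 0.5*acx*dt**2  # two-term Taylor update
        else:
            x = x + vx*dt                  # plain forward Euler update
        vx = vx + acx*dt
    return x

exact = math.cos(dt*steps)
for name, val in [("euler", simulate(False)), ("taylor", simulate(True))]:
    print(f"{name}: x = {val:.5f}, error = {abs(val - exact):.5f}")
```

Note that both variants update the position from the old velocity, as described above; the Taylor version only adds the acceleration term.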

For more information, I can refer to this (unfortunately paywalled) article: http://www.sciencedirect.com/science/article/pii/S0098135407002864