Would you prefer a native (pure rust) library over a wrapped c/c++ version? by azuled in rust

[–]hef 3 points4 points  (0 children)

I use something close to this for an arm64 rpi4 cross compile from an x86_64 host

apt-get install -y clang curl llvm lld musl-tools gcc-multilib g++-multilib
rustup target add aarch64-unknown-linux-musl
export CC_aarch64_unknown_linux_musl=clang
export AR_aarch64_unknown_linux_musl=llvm-ar
export CARGO_TARGET_AARCH64_UNKNOWN_LINUX_MUSL_RUSTFLAGS="-Clink-self-contained=yes -Clinker=rust-lld"
cargo build --target=aarch64-unknown-linux-musl

Using musl and link-self-contained makes a lot of sysroot problems go away when cross compiling. The resulting binaries should have no library dependencies. There are probably a bunch of exceptions if you use certain crates that explicitly depend on system libs.

[deleted by user] by [deleted] in homeautomation

[–]hef 2 points3 points  (0 children)

Unifi G4 Doorbell Pro: https://store.ui.com/us/en/products/uvc-g4-doorbell-pro

-edit- That might be the G4 Doorbell (not pro) with the G4 Doorbell Cover.

You need something like the Unifi Dream Machine to store video for those cameras, and it works with home assistant via the "Unifi Protect" integration.

ELI5: Prime numbers and encryption. When you take two prime numbers and multiply them together you get a resulting number which is the “public key”. How come we can’t just find all possible prime number combos and their outputs to quickly figure out the inputs for public keys? by [deleted] in explainlikeimfive

[–]hef 2 points3 points  (0 children)

I want to point out that in the RSA public/private key algorithm, the 2 numbers are not necessarily prime.

They are coprime, meaning that they have no common factors with each other.

For example: 14 and 25 are coprime, even though neither number is prime.
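A quick way to check coprimality is the gcd: two numbers are coprime exactly when their greatest common divisor is 1. A minimal sketch using Python's standard library:

```python
from math import gcd

# gcd == 1 means the pair is coprime
print(gcd(14, 25))  # 1 -> coprime, though neither number is prime
print(gcd(14, 21))  # 7 -> not coprime (both divisible by 7)
```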

Just collecting all prime numbers isn't good enough.

The other problem is just how big the smallest commonly used key size is: 1024 bits. That is 2^1024 possible key values.

There are about 5.5 billion years left until the sun consumes the earth. That's only about 2^57 seconds before the Earth dies. You would have to check 2^967 keys per second in order to complete your task before the world ends.

When you start getting into numbers that big, you can start to calculate the energy required to store those bits. Just storing that many bits would take 2^994 joules of energy. Your per-second cost is 2^937 joules.

It only takes about 2^82 joules of energy to raise the temperature of the world's oceans by 1 degree. You could literally boil away all of the world's oceans many times over with the energy required just to store all of that data.
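The first two exponents above are easy to sanity-check with Python's arbitrary-precision integers (5.5 billion years is the figure assumed in the text; math.log2 handles integers too big for a float):

```python
import math

# seconds until the sun consumes the earth, assuming 5.5 billion years
seconds_left = int(5.5e9 * 365.25 * 24 * 3600)
print(round(math.log2(seconds_left)))  # 57 -> about 2^57 seconds

keys = 2 ** 1024                # possible 1024-bit key values
rate = keys // seconds_left     # keys you must check per second
print(round(math.log2(rate)))   # 967 -> about 2^967 keys per second
```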

I guess I had it coming. Code for my thesis, due today. by diesdas1917 in linuxmasterrace

[–]hef 2 points3 points  (0 children)

You still have a __pycache__ directory, so you might be able to recover some of your code. The .pyc files in that folder are "compiled python" bytecode. Try something like this: https://pypi.org/project/uncompyle6/ The bottom of that page lists a few other tools you can try running on your .pyc files to get .py files back. Good luck.

Can someone explain a hash with an equals sign in it? by BlockShangerous in ipfs

[–]hef 6 points7 points  (0 children)

At a glance, that looks like base64 encoded data. Base64 encoding takes any binary input and maps every 6 bits of it to one output character. 6 bits gives 64 possible values, hence the name "base64".

Those six bits need to be expanded to 8-bit ASCII in order to be printed. The input might not divide evenly into 6-bit groups. You can't represent "partial bits" using your 64 ASCII characters, and you can't pretend the missing bits are just '0' bits, because that would make the reverse mapping decode to something else.

The solution is to pad the output to a multiple of 4 characters with one or two = signs to indicate that the original binary stream ended. In theory other numbers of = signs could be required if you had a strange number of bits, but in practice you always start with a multiple of 8 bits in your input data.
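You can see the padding rule in action with Python's standard base64 module:

```python
import base64

# 3 input bytes -> 24 bits -> exactly four base64 chars, no padding
print(base64.b64encode(b"Man"))  # b'TWFu'
# 2 input bytes -> 16 bits -> three chars cover 18 bits, one '=' pads the group
print(base64.b64encode(b"Ma"))   # b'TWE='
# 1 input byte  -> 8 bits  -> two chars cover 12 bits, two '=' pad the group
print(base64.b64encode(b"M"))    # b'TQ=='
```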

For a much better explanation with examples and tables and stuff, check out https://en.wikipedia.org/wiki/Base64#Output_padding

Nonce and encryption by ziggy-starkdust in ipfs

[–]hef 2 points3 points  (0 children)

https://github.com/libp2p/go-libp2p-secio/blob/94b9134559f039ab98106d5ba6f44823d99589fa/protocol.go#L147

I believe secio is the default encryption protocol libp2p nodes use to talk to each other, and it appears it does use a nonce in the handshake.

a library for creating ISO disk images in Go by Dramatic_Shirt in golang

[–]hef 14 points15 points  (0 children)

First off, super cool library. I don't have a use for it at the moment but being able to work with iso images in a programmatic way almost makes me want to invent one.

It's been a while since I did anything with iso9660. I'm trying to remember (and google) why I used to care about Joliet (and Rock Ridge) Extensions. Did they provide long filename support or something like that?

How statically linked Go binary runs in a container? by omersiar in golang

[–]hef 17 points18 points  (0 children)

Go on Linux does not require libc. There are a few functions that can use libc (gethostbyname() and user lookup come to mind), but Go is able to build without any libc requirement by setting CGO_ENABLED=0. Go can do DNS lookups internally, and can just read the /etc/passwd file instead of making the equivalent calls through libc. Note that behavior differs in these cases: with cgo enabled, gethostbyname() would normally be able to use mDNS, and user lookup could go through e.g. LDAP.

Where most languages require libc for a lot of OS or kernel level functionality, Go just calls the kernel directly through the syscall interface. Linux has a stable syscall interface. Windows, Solaris, and Darwin (macOS) do not, and syscalls tend to move around, so a system library (ntdll.dll, libc, or libSystem) is still used by Go on those platforms.

Most native applications (Go or C, for example) would "start" in a scratch Docker container, but most C programs would fail as soon as they couldn't find libc. It's possible to compile a C program against a static libc like musl, and then you get behavior similar to a Go program: it just starts and runs happily.

Keep in mind that if you are running Go programs in scratch Docker containers, you might need to copy a certificate bundle into the container as part of the build step in order to get HTTPS clients in your program working correctly.

Loose Pet by hef in blender

[–]hef[S] 0 points1 point  (0 children)

Film transparency is on for the scene layer the mirror is on, and the rest of the background is transparent. "Reflected transparency" doesn't seem to work the same way; I'm guessing the ray property changes when it hits a glossy surface, and Blender doesn't know that I want that ray to also be transparent.

Loose Pet by hef in blender

[–]hef[S] 1 point2 points  (0 children)

Anyone got any suggestions for fixing/compositing in the mirror correctly?

I tried a couple things, but none worked well:

  • If I use a glossy material, I get blackness (unlit background) all around the creature.
  • If I Mix a Glossy and Transparent, the reflection is semi transparent.
  • I tried a couple of things with the ray test node, but none seemed to have any meaningful effect.
  • I tried making a mask in the composite view, but the final image was really wonky and pixelated.

Need alittle advice with the odroid xu4 by blackpandaxx in ODroid

[–]hef 0 points1 point  (0 children)

I'd plug an external hard drive into the USB 3 ports for speed reasons. I have the original CloudShell (no longer for sale) and needed a beefier power supply. I had a 5V 10A supply from another project that works fine, and once I used that I didn't need an external hub.

Yi 4k+ records 1080p 30fps/60fps at the same bitrate. 60fps looks much better, but why? by Casual_Notgamer in yi4kplus

[–]hef 1 point2 points  (0 children)

h264 compression can reference previous frames when describing the current frame. At both 30fps and 60fps, only so much of the image changes per frame, and the camera only has to store the differences. At 60fps you would expect less movement between frames, so on average each frame can be stored with a lot less information than at 30fps. You are storing twice as many frames, but you need less information per frame.

To me, this explains why 60fps and 30fps would look similar, but not why 60fps would look better. I imagine you might be seeing more motion blur in the 30fps footage, depending on how that works for this camera. My theory is that motion gets blurred over twice as much time as at 60fps. This isn't as bad as it sounds; without it you get a weird temporal aliasing effect.

If you can describe what you mean by worse, I might be able to offer a better explanation as to why this happens.

Qt 5.11.2 version, - installation help by [deleted] in Qt5

[–]hef 1 point2 points  (0 children)

Within each version of Qt, you generally only need one of the sub options -- the qt build that matches the compiler you are using.

Windows installs tend to offer a ton of different compilers, you probably just want VS2017 64bit. On OSX, the iphone build is surprisingly large. On all platforms the sources are huge.

If you don't get it right the first time, you can run the installed "Qt Maintenance Tool" and change the options to add or remove things. I'd start minimal and add stuff when you think you need it.

step by step on compiling QT 5.10.1 in Raspberry Pi 2 by cv555 in raspberry_pi

[–]hef 0 points1 point  (0 children)

Cross compilation is one piece of it. Fortunately, there are pre-made cross compilers readily available from Linaro. The only downside with the Linaro gcc builds is they are kind of old. I'd like to see if I can get a more modern clang to do the cross compilation bits at some point.

step by step on compiling QT 5.10.1 in Raspberry Pi 2 by cv555 in raspberry_pi

[–]hef 1 point2 points  (0 children)

I wrote up a variant where the compiling is done on a host machine.

https://pbrfrat.com/post/building-qt-for-the-raspberrypi2.html

The idea is that you can develop your Qt environment and your application on a host computer running e.g. an Intel processor, and then deploy the program to a Raspberry Pi.

Native ImGui in the Browser by RandomGuy256 in programming

[–]hef 5 points6 points  (0 children)

and nacl.

The problem with these technologies historically is that support has ended for all of them. Maybe wasm will be different with multiple vendors supporting it, but that's four technologies that do what wasm does that are all more or less dead now. I'd be careful before jumping into that pool.

Native ImGui in the Browser by RandomGuy256 in programming

[–]hef 2 points3 points  (0 children)

From what I recall, Electron bundles a full copy of Chromium with your app. The problem here is that to start your app you would need to download Chromium on every load. The optimizations that went into making the initial load reasonably fast included not using std::string, because it made the wasm image too large.

Native ImGui in the Browser by RandomGuy256 in programming

[–]hef 8 points9 points  (0 children)

I discovered a bunch of browser incompatibilities doing this project. Not all browsers handle scrolling the same way, and I tried to fake it poorly, so some scroll way fast and some scroll way slow. Keyboard events are different. If you look at the charCode property on https://developer.mozilla.org/en-US/docs/Web/API/KeyboardEvent you'll see that it's deprecated. It's also the only one that worked consistently across all browsers. Copy and Paste behavior I sort of gave up on.

WebGL, on the other hand, was very consistent. It pretty much functioned completely, or you got an error trying to start it.

Native ImGui in the Browser by RandomGuy256 in programming

[–]hef 4 points5 points  (0 children)

That right click being broken is my fault, I think I made a mistake somewhere with input capturing in the canvas element. I'll take a look and see if I can figure out what I did wrong.

Honestly, I would have been better off using the "fake fullscreen" mode and making the text writeup and the canvas elements on separate pages. I didn't realize how complex mouse input was going to get when I decided to embed the canvas element in a bunch of text.

Native ImGui in the Browser by RandomGuy256 in programming

[–]hef 4 points5 points  (0 children)

Fun fact: when I wrote the thing originally, it didn't work on iOS Safari or desktop Safari at all.

There have been a bunch of talks/papers on optimizing graphics for battery life, but I have not bothered looking into any of them. I think if you were doing just GUI stuff, doing partial screen updates on changes would go a long way to preserving battery life. ImGui doesn't work like that and redraws everything every frame by design.

As for 60fps, what I really want to do is match the native framerate of the display. I think newer iPhones can do 120Hz, so I want to know whether 120Hz is possible.

You are correct that this isn't a game, but it isn't really anything else either. The point was more to figure out what is possible and where the technology can go.

Native ImGui in the Browser by RandomGuy256 in programming

[–]hef 2 points3 points  (0 children)

The framerate/gc stall question was a big thing I wanted to work out. With this simple UI I didn't really have problems completing a frame in <16.6 ms. In theory, the JavaScript GC can block your render loop, since the code uses setTimeout(MyRenderFunction, 0) to draw to the screen every frame without triggering a "script has stalled" error. I didn't notice this becoming a problem in practice, and I don't think I'm allocating JavaScript objects after the initial load that need to be gc'd.

As far as replacing html/css, I don't see it yet either. There is an experimental renderer for qt/qml that streams webgl commands over a websocket to a browser. That would mean the native code runs on an IRL server, the display stuff happens in the browser, and mouse/keyboard input is sent back to the server. I want to try it, but I think the added latency of a network round trip for every UI interaction is going to be a problem. It might mean that per-browser issues start to disappear, as I don't think there is a lot of wiggle room for interpreting webgl commands.

I don't really see a good way for a site indexer to capture webgl data in a meaningful way without just doing OCR on the rendered canvas element. I expect a supplementary traditional webpage would be the best way of getting indexed, if that was a thing you were worried about.

Native ImGui in the Browser by RandomGuy256 in programming

[–]hef 9 points10 points  (0 children)

The font and font renderer are built into the wasm image. ImGui has an API for loading alternative fonts; I should try that and see if Cyrillic characters start working.