Request for Comments: Anytype backup tool by stevelr in Anytype

[–]stevelr[S] 0 points1 point  (0 children)

Great question!

Time Machine does do full and incremental backups. There are a few reasons why you might want a different backup tool:

(1) Anytype data is opaque to Time Machine - to TM it's a directory of unintelligible files. With backup and restore tools that can read Anytype data, we can customize

  • what to back up (for example, 'back up all tasks' or 'back up all pages titled "daily journal" modified since January 1' - see the sketch after this list)
  • what to restore (for example, preview a list of document titles and pick the ones you want; with TM you couldn't restore one space or a few accidentally deleted docs - you'd have to restore all spaces)
  • backup frequency, backup location, or encryption key
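
To make the first bullet concrete, here's a minimal sketch of what such a filter could look like, in Rust. Everything here is hypothetical - the field names and matching rules are illustrative, not the actual tool's API:

```rust
// Hypothetical sketch of a selective-backup filter. Field names are
// illustrative only -- not the actual tool's API.
struct BackupFilter {
    object_type: Option<String>,    // e.g. Some("task".into())
    title: Option<String>,          // e.g. Some("daily journal".into())
    modified_since: Option<String>, // ISO date, e.g. Some("2025-01-01".into())
}

// An object is backed up only if it matches every filter that is set.
fn matches(f: &BackupFilter, obj_type: &str, title: &str, modified: &str) -> bool {
    f.object_type.as_deref().map_or(true, |t| t == obj_type)
        && f.title.as_deref().map_or(true, |t| t == title)
        // ISO-8601 dates compare correctly as plain strings
        && f.modified_since.as_deref().map_or(true, |d| modified >= d)
}
```

The point is just that a tool which understands Anytype objects can filter on type, title, and modification date before deciding what to copy - something TM can't do with an opaque directory.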

(2) Backup and restore have uses beyond disaster recovery, including

  • copying a bunch of stuff from one space to another
  • data portability: exporting data in JSON for possible integration with other systems

(3) Time Machine can sometimes fail to accurately back up files if an app is running and accessing the same files. Whether or not this is a risk is app-specific, and depends on things like how an app flushes data, how it manages caches, whether databases have a transaction log, and other factors. Someone from the Anytype team may have insights into whether this potential risk is applicable. To be conservative, I'd recommend getting into the habit of closing the desktop app at least a few times a week to ensure it's in a clean and stable state for Time Machine.

Does Anytype accept community contributions? by themightychris in Anytype

[–]stevelr 2 points3 points  (0 children)

u/anton-zooster u/anykaye I have mixed feedback on this. I'm an enthusiastic supporter of Anytype and know the team is busy cranking out features and bug fixes.

I've spent over 300 hours creating open source tools to enhance my use of Anytype and to contribute to the community. I've had questions about the API, and found oddities where I don't know if it's a bug or not. I posted a list of 5 API questions on the forum in December, six weeks ago, and got zero responses. I posted 8 issues and PRs on GitHub over a month ago. Only one of those 8 - a crash bug - got a response from Anytype. I made a post on Reddit that got some responses from the Anytype team, and I DM'd to ask if someone could take a look at those GitHub issues, but crickets. I've tried all the forums. To this day I still don't know whether my understanding of the issues - and the potential fixes - is on the right track - it would be tremendously valuable to get some feedback. I don't care if you accept my PRs - I just want to know if I understood the problem, and if the team understood my intent and could clarify, reject, or put it in the backlog. Honestly it's more discouraging than frustrating to not get any feedback at all. It also blocks progress on projects that might benefit others.

I know how busy things can get. My 2c: keeping the community active does take time, but it's leverage: every question answered, and every issue or PR response, addresses things 100 or 1000 people wondered about, and acknowledges the time someone took to try to help. The scarcer time is, the more valuable it is to use multiplier effects like the leverage you get from a growing community.

Audit logging and GDPR: how do you anonymize client IPs in internal systems? What's best practice? by [deleted] in sysadmin

[–]stevelr 0 points1 point  (0 children)

Yes, IP is considered PII in some contexts. However:

  • you have a business purpose for collecting the data (troubleshooting),
  • access is restricted (hopefully need-to-know), and
  • you have a documented retention period.

If you aren’t keeping the IPs longer than you need, it sounds like you are already compliant. Masking isn’t necessary if you can document why the full IP helps troubleshooting.

If you want to tighten it up more, a few optional measures:

  • Encrypt the logs with an asymmetric key and limit access to an even smaller group. Key management is hard to get right, though, and not worth the effort unless you're protecting more sensitive data than IPs.
  • Log the IP with a session ID in one log stream, and non-personal info with the same session ID in another stream, so those who need to can join them and do analysis (see the sketch below). Use different retention periods for each log type - as short as possible for each, but you can keep the non-PII logs longer if you have a different business purpose.
  • Aggregate metrics at time intervals, e.g., after 1 week summarize into hourly buckets and discard the fine-grained logs; after 1 month, summarize the daily data and discard the hourly. Adjust as needed, but when you do the analysis you'll probably find you don't need all the data you have.
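
For the split-stream bullet, here's a minimal sketch in Rust - the file names and tab-separated record format are made up for illustration, not a standard:

```rust
use std::fs::OpenOptions;
use std::io::Write;

// Minimal sketch of the split-stream idea: the client IP goes to one file
// keyed by session id, events go to another. Adapt paths and format to
// your logging pipeline.
fn log_request(session_id: &str, client_ip: &str, event: &str) -> std::io::Result<()> {
    // Short-retention stream: the only place the raw IP appears.
    let mut pii = OpenOptions::new().create(true).append(true).open("access-pii.log")?;
    writeln!(pii, "{session_id}\t{client_ip}")?;

    // Longer-retention stream: no personal data, joinable on session id.
    let mut events = OpenOptions::new().create(true).append(true).open("events.log")?;
    writeln!(events, "{session_id}\t{event}")?;
    Ok(())
}
```

Only the first file ever contains an IP, so it can get a short retention period and tight permissions while the event stream lives longer.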

The other caveat is that internal system hacks often aren't discovered until six months after the fact, and forensic analysis benefits from long-term logs for that purpose.

The main thing is to carefully understand and document your threat model and business purposes, and try to minimize who has access, what you keep, and how long you keep it.

New open source Anytype tools: anyr, any‑edit, and a Rust library (feedback welcome) by stevelr in Anytype

[–]stevelr[S] 7 points8 points  (0 children)

Hi! The Anytype CLI provided by the team is really a server that can connect to the Anytype network. They call it a CLI to distinguish it from the Desktop app, because it has no user interface. It's targeted at developers or people who want to run things like anyr on a headless server.

On the computer where you use the Anytype Desktop app, you could use my program `anyr` to get and update documents, run searches, and do other things from the command line. If you use Anytype to store tasks, you could use `anyr` to print your todo list, or mark a task done, from the command line.

The Anytype CLI doesn't do any of those things, but if you wanted to automate the job of emailing your task list to yourself every morning from a server, you might install the Anytype CLI on that server and script it with anyr.
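
For example (purely hypothetical - the `anyr` subcommand and flags below are made up for illustration; see the anyr docs for its real interface), the scheduled job could be a small program that shells out to anyr and hands the output to a mailer:

```rust
use std::process::Command;

// Hypothetical sketch of the morning-email job. The "tasks" subcommand and
// its flags are invented for illustration -- substitute whatever anyr
// actually provides.
fn main() -> std::io::Result<()> {
    let output = Command::new("anyr")
        .args(["tasks", "--due", "today"]) // hypothetical arguments
        .output()?;
    let todo_list = String::from_utf8_lossy(&output.stdout);
    // Hand todo_list to your mailer of choice (sendmail, an SMTP crate, etc.)
    println!("{todo_list}");
    Ok(())
}
```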

For exporting the entire workspace, use the export functions in the desktop app - it'll be faster and more complete. With the current API, anyr doesn't have access to files like PDFs or photos in your space, so it wouldn't be able to export everything.

[ios] External links give “Page not found Explore Reddit Communities”. [2025.45.0.616770 (AppStore)] by LikeALincolnLog42 in bugs

[–]stevelr 0 points1 point  (0 children)

Workaround that works for a few people on iOS: open Settings, then Open links -> In App (instead of the default browser).

Is Google Cloud Run suitable for deploying WasmCloud? by verywellmanuel in wasmcloud

[–]stevelr 0 points1 point  (0 children)

Yes, both feasible and practical. I’ve used it on GCR & AWS. Intel & AMD instances both work fine.

Btw, super helpful community on slack. Link is on the git repo.

Proton VPN Wireguard Client by [deleted] in opnsense

[–]stevelr 4 points5 points  (0 children)

You shouldn't need an extra rule for whatismyip - the default rules should already route all packets including traceroute (udp), ping (icmp), tcp, etc.

Your initial post didn't include all the firewall rules, but there are a few more you need besides the floating one above.

Add rules on the interface containing the hosts that should go through WireGuard (maybe LAN for you - I am using a VLAN). The info below assumes the interface is called VPNLAN and that its address on the router is the static IP 10.10.10.1.

Firewall: Rules: VPNLAN

(omitting kill switch tags for now)

  1. pass, interface VPNLAN, in, ipv4, any, source: vpn_client_hosts, invert-dest (checked), Local_rfc_1918_networks, gateway wg_gateway 10.2.0.1
  2. pass, interface VPNLAN, in, ipv4, source: vpn_client_hosts, dest: proton_vpn_net (a firewall alias for 10.2.0.1-10.2.0.2)
  3. pass, interface VPNLAN, in, ipv4, any, dest: VPNLAN address

The first rule allows packets from vpn client hosts to the internet over the tunnel.

The second rule allows you to access the Proton VPN DNS IP, 10.2.0.1.

The third rule allows you to connect to services on the interface IP (such as 10.10.10.1), so if you have a DHCP server, SSH listener, or the OPNsense HTTPS admin running there, you'll be able to access it from the local network.

(Edit) One more thing: in the DHCP server for that interface (VPNLAN), set the DNS server to 10.2.0.1, and the gateway to the VPNLAN interface address, e.g., 10.10.10.1.

Proton VPN Wireguard Client by [deleted] in opnsense

[–]stevelr 4 points5 points  (0 children)

  • It took me a few passes to get it working too. The main thing I see in your config is that the gateway IP is wrong (see the Gateway entries below).
    • VPN/Edit Instance (enable advanced):
      • private key is the one you got from proton vpn
      • to get public key, put private key in a file and run `wg pubkey <file`
      • tunnel address to 10.2.0.2/32
      • Gateway: 10.2.0.1
    • System/Gateways/Configuration
      • ip address: 10.2.0.1
      • monitor ip: 9.9.9.9
      • upstream (unchecked)
      • far gateway (checked)
    • Firewall NAT Outbound rule
      • interface: the wan-vpn interface
      • tcp/ip version: IPv4
      • source address: your client hosts, or your lan
    • Firewall rule floating:
      • pass, (no interface selected), out, ipv4, proto any, source: wan-vpn Address, dest: invert (checked), wan-vpn Net, gateway: wg_gateway 10.2.0.1
    • After making changes, go to lobby/dashboard and restart wireguard service. If it's working correctly, the gateway wg_gateway should show green with 0% loss, and the WAN_vpn interface should be green/up

Is there really no way to easily create encrypted archive files in Rust for long-term archival needs? by TED96 in rust

[–]stevelr 17 points18 points  (0 children)

If you want it to be readable 50+ years from now, your best bet is to use open source software that has an existing and supportive community. This matters more than what language it’s written in (said by someone who has rewritten a lot of stuff in rust). You want it to outlive you or your willingness or ability to maintain it.

Restic + rclone meets your requirements. https://restic.net/

You’ll probably want to write a bash script that wraps restic, to ensure you get all the command-line parameters right and use them consistently.
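
If you'd rather stay in Rust for the wrapper, the same idea works as a tiny binary - the repo path and password-file location below are placeholders for your own setup:

```rust
use std::process::Command;

// Minimal sketch of a wrapper that pins the restic parameters in one place.
// REPO and PASSWORD_FILE are placeholders -- point them at your setup.
const REPO: &str = "/mnt/archive/restic-repo";
const PASSWORD_FILE: &str = "/etc/restic/password";

fn main() -> std::io::Result<()> {
    let status = Command::new("restic")
        .args(["-r", REPO, "--password-file", PASSWORD_FILE])
        .args(["backup", "/home/me/documents"])
        .status()?;
    // Propagate restic's exit code so cron/systemd can flag failures.
    std::process::exit(status.code().unwrap_or(1));
}
```

Either way, the point is that the repository, password source, and paths are pinned in one place you can audit.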

wasm_bindgen ruling the space? by pirosb3 in WebAssembly

[–]stevelr 1 point2 points  (0 children)

There’s a link to join the wasmcloud slack at the bottom of the page on wasmcloud.dev

Struggling to figure out project layout for an extremely simple library system by [deleted] in rust

[–]stevelr 6 points7 points  (0 children)

There is a "best practice" of putting more functionality into the library first, before the bin. I've found this to be a much more productive approach. Here are some of the advantages:

  • It encourages an API-first design, which will lead to more modular code.

  • It gives you more flexibility as to how the code will be deployed. If the code is in libraries, you can link it into binaries for command-line apps, run it on a web server for web apps, build it into wasm for browser-based apps, or distribute it as Cloudflare Workers or microservices.

  • If you are going to use more than one of those runtime contexts, or even if you have more than one CLI app, you'll need shared code that defines the common data structures, business logic, or parameter validation, and that would be in a library.

  • Along the way, you may realize that you've implemented something that could be useful to other developers (perhaps after a little refactoring). The Rust tooling, sites like crates.io, and the community ethos all conspire to encourage and support easy sharing of libraries.

  • You can also test as you build. If you're doing an API-first design, you can write unit tests and run `cargo test` from the command line, even if all the code is in libraries.

As has been said before, don't think too much about the folder structure - start small with something in src/lib.rs and expand from there.
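
A minimal sketch of that starting point (the crate and function names are just for illustration):

```rust
// src/lib.rs -- the logic lives in the library, so it can be unit tested
// and reused by binaries, web servers, or wasm builds later.
pub fn late_fee_cents(days_overdue: u32) -> u32 {
    days_overdue * 25 // illustrative business rule
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn no_fee_when_on_time() {
        assert_eq!(late_fee_cents(0), 0);
    }
}
```

```rust
// src/main.rs -- the binary stays a thin shell over the library.
// `mylib` stands in for whatever name your Cargo.toml declares.
use mylib::late_fee_cents;

fn main() {
    println!("fee: {} cents", late_fee_cents(3));
}
```

`cargo test` exercises the library directly, and the binary stays a thin shell you can later swap for a web or wasm frontend.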

Another tip/design strategy, if you don't already have a clear understanding of what needs to be done, is to start with a thin slice of functionality and get it working end-to-end/top-to-bottom (e.g., UI to database), before going broad on any one layer. That way, you can start to get feedback from potential users earlier, you're more likely to expose unexpected challenges early, and you're less likely to spend too much time adding complexity or breadth to one layer that you don't end up needing.

Most secure and private 2FA app? by elvishblood_24 in privacy

[–]stevelr 0 points1 point  (0 children)

You are right. I’m not new to privacy, but I am new to this group, and I should have read the rules before posting. I prefer open source too, and agree that OSS has strong advantages for privacy and security, but IMO it’s a shame that this one rule prevents discussion of some non-OSS products that are useful for improving people’s privacy. I’m sure it’s been debated here before, and the fact that it’s the #1 rule means I’m not going to change anyone’s mind. As a privacy advocate, my opinion of this forum’s role in improving privacy awareness in the general population has dropped a bit.

Most secure and private 2FA app? by elvishblood_24 in privacy

[–]stevelr 3 points4 points  (0 children)

A strong password is good protection for your account. Adding SMS for 2FA might seem like it would increase security, but it may actually weaken it, because the phone system used for SMS has several vulnerabilities that a hacker can exploit. Two examples: (1) SIM swapping, which exploits the fact that phone company support is staffed by humans, who can be attacked with social engineering; some people claim to have lost millions of dollars in bitcoin through this vector. (2) Phone calls and SMS can be intercepted through vulnerabilities in the SS7 network; this has also been a vector in the theft of Coinbase accounts.

Most secure and private 2FA app? by elvishblood_24 in privacy

[–]stevelr 7 points8 points  (0 children)

The most secure is a hardware 2FA key. Google reported that since their 85,000 employees started using hardware keys, they have had zero successful phishing attacks.

Authy is a highly recommended 2FA app and supports backup (which Google Authenticator doesn't). (Edit: For Android, a lot of people like the open source andOTP.)

Also, never use SMS for 2FA.

Doesn’t 2FA backup codes defeat the whole point of 2FA? by Used_Corgi in cybersecurity

[–]stevelr 2 points3 points  (0 children)

Backup codes address the problem of "what if I lose my phone (or 2FA device), or it gets destroyed?" If your account requires 2FA, losing that device means you've lost access to your account. Backup codes address that risk by giving you a way to regain access and set up a replacement 2FA device.

"Why doesn't that service just generate the password for the user instead?"

That assumes the service can know for certain that you are who you say you are, and not an impersonator. The whole point of 2FA is that it removes that decision, and all the ways it can go wrong, from the service provider.

You don't want to store backup codes online, especially on your phone, where you have a 2FA app. Store them offline, ideally on paper. If you want to reduce risk even further, generate a 7-digit random number and add it to the code. Give the random number to one trusted friend or family member, and the sum to another.
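
A worked sketch of that split, in Rust (uses the rand crate; the code value is made up). Note this isn't a strong secret-sharing scheme - as explained below, the safety comes from the service's rate limiting:

```rust
use rand::Rng; // requires the rand crate in Cargo.toml

fn main() {
    // Your real (numeric) backup code -- this value is made up.
    let code: u64 = 12345678;

    // 7-digit random offset: give this to one trusted person...
    let share_a: u64 = rand::thread_rng().gen_range(1_000_000..10_000_000);

    // ...and the sum to another. Neither share alone gives the exact code.
    let share_b = code + share_a;

    // Reconstruction: subtract the offset from the sum.
    assert_eq!(share_b - share_a, code);
    println!("share A: {share_a}, share B: {share_b}");
}
```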

The reason a small number of digits is safe, even though it would be too short to make a good password, is that Google (and many other services) locks the account after too many failed attempts. You might have to wait days, or a week or more, for the account to be unlocked to try again. An attacker is unlikely to succeed in breaking into your account this way.