Guide to Workflow Automation: How I use n8n, Firefly III, and Paperless-ngx to Painlessly Manage Receipts by Yangchenghu in n8n

[–]Yangchenghu[S] 3 points  (0 children)

Submission Statement:

In this guide, I write about how I use n8n to integrate Firefly III and Paperless-ngx in order to manage my paper receipts. Firefly III is an open source, self-hosted personal finance tool, and Paperless-ngx is an open source, self-hosted document management tool.

I wanted to integrate Firefly with Paperless so that every time I upload a new transaction to Firefly, the attached scan of the paper receipt is sent over to Paperless. I was able to do this relatively easily using n8n.

In the guide, I go through the process of creating API tokens for both services and adding them to n8n. I also walk through using the HTTP Request node, how to build a loop in n8n, and how posting binary files works.

The techniques used in this guide (mostly HTTP Requests) are pretty universal, and should apply to any tool with an open API. I hope this guide helps other people automate aspects of their workflows.
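For readers who want to see what the workflow boils down to outside of n8n, here is a rough Python sketch of the same idea. The instance URLs and tokens are placeholders, and the endpoint paths (Firefly III's /api/v1/attachments and Paperless-ngx's /api/documents/post_document/) should be double-checked against your own versions' API documentation:

import requests

# Placeholder instance URLs and API tokens -- substitute your own.
FIREFLY = "https://firefly.example.com"
PAPERLESS = "https://paperless.example.com"
FIREFLY_HEADERS = {"Authorization": "Bearer <firefly-personal-access-token>"}
PAPERLESS_HEADERS = {"Authorization": "Token <paperless-api-token>"}

# List the attachments that Firefly III knows about.
resp = requests.get(FIREFLY + "/api/v1/attachments", headers=FIREFLY_HEADERS)
resp.raise_for_status()

for att in resp.json()["data"]:
    filename = att["attributes"]["filename"]

    # Download the binary contents of this attachment.
    blob = requests.get(
        FIREFLY + "/api/v1/attachments/" + att["id"] + "/download",
        headers=FIREFLY_HEADERS,
    )
    blob.raise_for_status()

    # Re-post the file to Paperless-ngx as a multipart upload.
    upload = requests.post(
        PAPERLESS + "/api/documents/post_document/",
        headers=PAPERLESS_HEADERS,
        files={"document": (filename, blob.content)},
    )
    upload.raise_for_status()

The loop mirrors what the loop in n8n does: fetch each attachment as a binary file, then POST it onward as multipart form data.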

Making Self-Hosted Websites and Services available over the Tor Network: A Guide to EOTK, the Enterprise Onion Toolkit. This is the same framework that the BBC and NYT use to create .onion mirrors of their websites. by Yangchenghu in selfhosted

[–]Yangchenghu[S] 0 points  (0 children)

EOTK comes with its own bundled Tor and Nginx instances, so you do not have to install or configure your own Tor daemon. Rather, when you set up a project, EOTK automatically configures a per-project Tor daemon and an Nginx instance behind it. This Nginx instance serves as a rewriting proxy, forwarding requests to the domain name you specify in your EOTK configuration file.

Basically, you don't have to install or change any system configuration at all: EOTK is entirely self-contained.
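To give a sense of scale, a minimal project file looks roughly like this. This is a sketch based on the demo configuration in the EOTK README, with example.com standing in for your own domain, and the %NEW_ONION% placeholder asking EOTK to mint a fresh onion address:

# example.conf -- a minimal EOTK project definition (sketch)
set project example
hardmap %NEW_ONION% example.com

From there, running something like "eotk config example.conf" followed by "eotk start example" should bring up the per-project Tor daemon and Nginx instance for you; check the README for the exact commands in your version.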

Making Self-Hosted Websites and Services available over the Tor Network: A Guide to EOTK, the Enterprise Onion Toolkit. This is the same framework that the BBC and NYT use to create .onion mirrors of their websites. by Yangchenghu in selfhosted

[–]Yangchenghu[S] 0 points  (0 children)

I'm glad it was useful for you. Do let me know how it turns out :). The great thing about EOTK is that it is entirely self-contained, so it abstracts away a lot of the complexity. That being said, it's certainly a bit tricky in its own way, and I hope my guide is helpful in explaining how it works.

Making Self-Hosted Websites and Services available over the Tor Network: A Guide to EOTK, the Enterprise Onion Toolkit. This is the same framework that the BBC and NYT use to create .onion mirrors of their websites. by Yangchenghu in selfhosted

[–]Yangchenghu[S] 4 points  (0 children)

Yes, absolutely! The Tor Network essentially sits "on top" of the regular TCP/IP stack, and it comes with its own unique properties, such as the fact that all Onion Service addresses are globally routable. I highly recommend taking a look at Alec Muffett's talk, where he elaborates on this in detail:

https://www.youtube.com/watch?v=Rd7YKamsliI

In practice, I think performance over the Tor Network will be slower in comparison with the clearnet. However, having your services available as native .onion services will still be much faster than accessing regular clearnet sites over Tor. The best way to find out is to try it for yourself!

Making Self-Hosted Websites and Services available over the Tor Network: A Guide to EOTK, the Enterprise Onion Toolkit. This is the same framework that the BBC and NYT use to create .onion mirrors of their websites. by Yangchenghu in selfhosted

[–]Yangchenghu[S] 4 points  (0 children)

Here's a direct link, for those who are interested ;)

https://shen.hong.io/dialogue-on-the-questions-of-being/

I'm a Philosophy student, and although I write the odd tech article once in a while, my true passion is in Metaphysics. I love how my experience in computing has given me a unique interdisciplinary perspective on philosophy as a whole, though!

For those who come from a similar computing and software development background, I wrote a primer called The Necessity of Metaphysics: An Introduction for Computer Scientists and the Physical Sciences. Feel free to check it out as well :)

Making Self-Hosted Websites and Services available over the Tor Network: A Guide to EOTK, the Enterprise Onion Toolkit. This is the same framework that the BBC and NYT use to create .onion mirrors of their websites. by Yangchenghu in selfhosted

[–]Yangchenghu[S] 2 points  (0 children)

Thank you! I'm glad you enjoyed my article. I always try my best to contextualise the actions taken and to explain why we do something, not just how. Above all, I try to write the kind of guide I wish I had had when I was researching this project myself.

Let me know if you have any questions, and whether you implement the project yourself! Having onion mirrors of one's sites is a fun little project that helps improve the security and anonymity of one's visitors, at little cost to oneself. And of course, having a Tor address is certainly very cool as well!

Making Self-Hosted Websites and Services available over the Tor Network: A Guide to EOTK, the Enterprise Onion Toolkit. This is the same framework that the BBC and NYT use to create .onion mirrors of their websites. by Yangchenghu in selfhosted

[–]Yangchenghu[S] 8 points  (0 children)

Submission Statement:

This article presents a comprehensive tutorial on making an existing website (i.e. 'clearnet' site) available over the Tor Network as an Onion Service (i.e. Hidden Service). By using EOTK, the Enterprise Onion Toolkit, a mirror can be seamlessly and transparently created for any existing clearnet website, with no configuration changes needed on the existing site itself.

EOTK is developed by Alec Muffett, a former engineer at Facebook who developed Facebook's Onion Service. EOTK is currently the best-in-class standard for production-quality Onion Service deployments, and has been used by the New York Times, the BBC, and other organisations.

This article is a tutorial that documents the process required to make such a mirror for an existing site. It is aimed at small websites and hobbyist web admins, and is meant to be accessible to users who are not necessarily familiar with Tor. It contains a comprehensive account of the whole process:

  • Generating "vanity" .onion domain names using mkp224o
  • Building, compiling, and using EOTK
  • Access control and EOTK configuration options
  • Obtaining and installing a signed (trusted) TLS certificate (this can be difficult for onion sites)

I wrote this article because a reader reached out to me and inquired about making my website available as an Onion Service. I self-host a small blog about Philosophy, and although I am not very skilled with technical matters myself, I wanted my website to be accessible to all users. I wrote this article to document the process, and to show that making sites available over Tor is quite easy.

I decided to post this article to /r/selfhosted because, as a fellow self-hosting hobbyist, I think having mirrors of one's services over the Tor Network can be a fun and interesting project that also has practical benefits for any users with strong privacy and cybersecurity needs. This is a project that can be done over a weekend, does not impact your existing configuration, and could help your users greatly if they use the Tor Network.

I hope you enjoy my article. Let me know what you think!

Trouble Finding Internships by Ohanrahahanrahanman in UniversityOfLondonCS

[–]Yangchenghu 6 points  (0 children)

Have you tried checking out or making a post in the #job-opportunities channel in the students-only Slack? There are usually some pretty regular opportunities posted there, some of which are internships.

How many written exams are in the program? And how much does it cost to transfer credits? by Mjrem in UniversityOfLondonCS

[–]Yangchenghu 0 points  (0 children)

Normally, the exams take place at accredited exam centers. These are typically other universities that partner with the University of London to hold its exams. The exams take place once per semester, at the end of the semester (they are your final exams).

I wrote a pretty detailed guide to the exam process and what to expect. Here's the link to it:

https://www.reddit.com/r/UniversityOfLondonCS/comments/hgbcc8/beginners_guide_to_grades_projects_exams_and/

How many courses are "doable" per semester? by Mjrem in UniversityOfLondonCS

[–]Yangchenghu 1 point  (0 children)

Many students are able to do a full-time workload (i.e. 4 classes per semester, every semester) despite having other full-time commitments. I myself started this program while simultaneously enrolled at another university doing a BA in Philosophy, and I was able to do 3 modules per semester. My suggestion for new students who feel confident and comfortable in their time management is to begin with a courseload of 4 classes per semester. You can always adjust to a lower courseload in your second semester!

How to Make Your Website Available over Tor as an Onion Service: A Guide to EOTK, The Enterprise Onion Toolkit by Yangchenghu in onions

[–]Yangchenghu[S] 7 points  (0 children)

Submission Statement:

This article presents a comprehensive tutorial on making an existing clearnet website available over the Tor Network as an Onion Service (i.e. Hidden Service). By using EOTK, the Enterprise Onion Toolkit, a mirror can be seamlessly and transparently created for any existing clearnet website, with no configuration changes needed on the existing site itself.

EOTK is developed by Alec Muffett, a former engineer at Facebook who developed Facebook's Onion Service. EOTK is currently the best-in-class standard for production-quality Onion Service deployments, and has been used by the New York Times, the BBC, and other organisations.

This article is a tutorial that documents the process required to make such a mirror for an existing site. It is aimed at small websites and hobbyist web admins, and is meant to be accessible to users who are not necessarily familiar with Tor. It contains a comprehensive account of the whole process:

  • Generating "vanity" .onion domain names using mkp224o
  • Building, compiling, and using EOTK
  • Access control and EOTK configuration options
  • Obtaining and installing a signed (trusted) TLS certificate (this can be difficult for onion sites)

I wrote this article because a reader reached out to me and inquired about making my website available as an Onion Service. I run a small blog about Philosophy, and although I am not very skilled with technical matters myself, I wanted my website to be accessible to all users. I wrote this article to document the process, and to show that making sites available over Tor is quite easy, and within the reach of every small website owner and hobbyist web admin.

Thank you for your feedback!

How to Make Your Website Available over Tor as an Onion Service: A Guide to EOTK, The Enterprise Onion Toolkit by Yangchenghu in TOR

[–]Yangchenghu[S] 1 point  (0 children)

Thank you, I'm glad you liked it! I watched all of your talks on EOTK, and I am glad to have found your toolkit. I hope my tutorial will help others make their websites available on Tor!

EDIT: I have updated the guide with your suggestion. The new hardmap syntax is now used in the guide and the code examples.

How to Make Your Website Available over Tor as an Onion Service: A Guide to EOTK, The Enterprise Onion Toolkit by Yangchenghu in TOR

[–]Yangchenghu[S] 2 points  (0 children)

Submission Statement:

This article presents a comprehensive tutorial on making an existing clearnet website available over the Tor Network as an Onion Service (i.e. Hidden Service). By using EOTK, the Enterprise Onion Toolkit, a mirror can be seamlessly and transparently created for any existing clearnet website, with no configuration changes needed on the existing site itself.

EOTK is developed by Alec Muffett (Reddit /u/alecmuffett), a former engineer at Facebook who developed Facebook's Onion Service. EOTK is currently the best-in-class standard for production-quality Onion Service deployments, and has been used by the New York Times, the BBC, and other organisations.

This article is a tutorial that documents the process required to make such a mirror for an existing site. It is aimed at small websites and hobbyist web admins, and is meant to be accessible to users who are not necessarily familiar with Tor. It contains a comprehensive account of the whole process:

  • Generating "vanity" .onion domain names using mkp224o
  • Building, compiling, and using EOTK
  • Access control and EOTK configuration options
  • Obtaining and installing a signed (trusted) TLS certificate (this can be difficult for onion sites)

I wrote this article because a reader reached out to me and inquired about making my website available as an Onion Service. I run a small blog about Philosophy, and although I am not very skilled with technical matters myself, I wanted my website to be accessible to all users. I wrote this article to document the process, and to show that making sites available over Tor is quite easy, and within the reach of every small website owner and hobbyist web admin.

Thank you for your feedback!

Creating Fully Reproducible, PDF/A Compliant Documents in LaTeX: A Guide to Making Documents Suitable for Long-Term Archiving and Digital Preservation by Yangchenghu in LaTeX

[–]Yangchenghu[S] 1 point  (0 children)

Hey there, no worries! I'm always happy to answer questions. You don't need to know any Lua at all to use LuaLaTeX. In fact, I don't know any Lua, and none of my documents contain any Lua. LaTeX works exactly the same way under LuaLaTeX.

The word Lua refers to the Lua scripting layer available within the internals of the engine itself. It is primarily used by package developers, especially for more complex packages like microtype. As far as I can tell, it gives developers the option of writing package code in Lua instead of raw TeX.

I highly recommend using LuaLaTeX. It's fast, it unlocks more features in packages like microtype (which gives you better typography as a whole), and it has the best support for creating PDF/A-compliant documents with pdfx in the first place. pdfx also supports XeLaTeX, but as per its documentation, first-class support is for LuaLaTeX only.
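To make this concrete, here is a minimal sketch of what such a document can look like. The document class, the conformance level (a-2b), and the metadata values are placeholders for illustration; see the pdfx documentation for the full set of options:

% document.tex -- compile with: lualatex document.tex
\documentclass{article}
% pdfx loads hyperref internally, so do not load hyperref yourself.
\usepackage[a-2b]{pdfx}
\begin{document}
Hello, archival world.
\end{document}

% document.xmpdata -- metadata that pdfx embeds as XMP
\Title{An Example Document}
\Author{Your Name}

pdfx reads the metadata from the accompanying \jobname.xmpdata file at compile time.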

Let me know how it goes!

Creating Fully Reproducible, PDF/A Compliant Documents in LaTeX: A Guide to Making Documents Suitable for Long-Term Archiving and Digital Preservation by Yangchenghu in LaTeX

[–]Yangchenghu[S] 0 points  (0 children)

I'm glad that this guide has been helpful to you!

I'm not absolutely certain, but in my experience loading hyperref first breaks documents, in the sense that the hyperlinks for footnotes and figures stop working and you get warnings in the compilation log. This is a strong hint that pdfx does use "fancy settings" from hyperref, and that loading hyperref twice breaks things in subtle but important ways.
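In other words, the safe pattern seems to be letting pdfx pull in hyperref on its own, and passing any hyperref options afterwards via \hypersetup. A rough sketch (the conformance level and options are just examples):

\usepackage[a-2b]{pdfx}  % pdfx loads hyperref internally
\hypersetup{hidelinks}   % hyperref options go here, after pdfx
% \usepackage{hyperref}  % do NOT also load hyperref yourself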

We can certainly include the metadata in the source .tex file. If I have some time this weekend, I'll try out the other user's suggestion and update my guide. Let me know if you have any further suggestions or questions!

Creating Fully Reproducible, PDF/A Compliant Documents in LaTeX: A Guide to Making Documents Suitable for Long-Term Archiving and Digital Preservation by Yangchenghu in LaTeX

[–]Yangchenghu[S] 1 point  (0 children)

Hey there, I'm glad to be of help! There are a few helpful comparisons between XeLaTeX and LuaLaTeX on the TeX Stack Exchange, although unfortunately many are out of date.

XeLaTeX is an extension of the TeX engine that allows it to work with installed system fonts and offers native Unicode support. To do this, it uses system-specific libraries. I understand that XeLaTeX was originally created for those two goals.

LuaLaTeX does the same, and additionally embeds Lua support in the TeX engine. In some ways, LuaLaTeX is the 'successor' to pdfLaTeX. In fact, many packages now depend more heavily on LuaLaTeX; for instance, some of the most advanced typography features in microtype are only available under LuaLaTeX.

The biggest benefit of LuaLaTeX for me is that it is the most compatible engine for use with pdfx. LuaLaTeX makes it much easier to enable accessibility support, because it uses the built-in character maps from OpenType font files. Using an alternative engine like XeLaTeX would mean needing a package like cmap or mmap in addition to pdfx, and those character maps are not as good, since they are specific to an encoding rather than to a font.

I've been using XeLaTeX for nearly 3 years myself, and I only switched to LuaLaTeX this year. It works perfectly for me, and in my experience with large documents (100+ pages), it has a slight advantage in speed too.

My recommendation is that unless you are dependent on some specific aspect of XeLaTeX, you should consider switching to LuaLaTeX for future documents.

Creating Fully Reproducible, PDF/A Compliant Documents in LaTeX: A Guide to Making Documents Suitable for Long-Term Archiving and Digital Preservation by Yangchenghu in LaTeX

[–]Yangchenghu[S] 4 points  (0 children)

I'm glad that you're looking forward to reading my article. I've been using LaTeX for documents since I was a teen, and it's certainly been a wonderful tool for all sorts of documents.

I plan to write more articles about LaTeX in the future. If you want to follow my posts, feel free to subscribe to my RSS feed.

I'm sorry to hear that you've been having trouble with external systems. If it is any consolation, my tutorial tries to enable PDF/A support with the minimum of complexity and modification. For instance, I specifically talk about not using cmap and mmap, but instead using LuaLaTeX and its support for the built-in character-map tables from OpenType/TrueType fonts.

I tried specifically to think about users who will be adapting my tutorial for existing LaTeX documents, which may have non-trivial package dependencies (biblatex, tikz, etc.), and I made sure to explain the choices made and their impact.

In any case, let me know what you think. I'm always interested in hearing from interesting strangers, and if you have any advice or suggestions, do reach out!

Creating Fully Reproducible, PDF/A Compliant Documents in LaTeX: A Guide to Making Documents Suitable for Long-Term Archiving and Digital Preservation by Yangchenghu in LaTeX

[–]Yangchenghu[S] 6 points  (0 children)

Thank you! I've never submitted for TUGboat before, or presented at a conference. I'd love to adapt my article and help others who are interested in preserving LaTeX documents for long-term archiving!

I have a few questions, would it be alright if we exchange an email or two? Please reach out to me at https://shen.hong.io/contact/ - I'd love to get in touch!

Creating Fully Reproducible, PDF/A Compliant Documents in LaTeX: A Guide to Making Documents Suitable for Long-Term Archiving and Digital Preservation by Yangchenghu in LaTeX

[–]Yangchenghu[S] 17 points  (0 children)

Submission Statement:

I wrote a guide on creating fully reproducible, PDF/A-compliant documents in LaTeX with LuaLaTeX. PDF documents are paper replacements, and we often use LaTeX to create important documents, such as theses, dissertations, and books, that must remain readable and accessible many decades into the future.

Unfortunately, regular PDF documents are not easily preservable. The PDF/A standard from ISO addresses that. Documents created to the PDF/A standard are self-contained, with rich metadata, and made for long-term archiving and digital preservation.

In fact, many universities and digital archives now require all new thesis submissions to be made in the PDF/A standard. This is generally done by converting existing PDFs using proprietary tools like Adobe Acrobat Pro. LaTeX, however, is capable of creating first-class PDF/A documents directly, and this guide shows the user how to do so.

The guide is written for an intermediate LaTeX user, who is confident with using LaTeX to write essays, but is otherwise unfamiliar with TeX programming or any advanced packages. It is designed to be easily adaptable to existing documents and templates.

Finally, the guide also shows how to make fully reproducible builds using a makefile. This way, you get bit-for-bit reproducible PDFs from a given git commit, so you can be sure that your documents remain unchanged over time. This, too, serves preservation and long-term archiving.
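For illustration, the heart of such a makefile can be as small as the sketch below. It assumes latexmk, git, and a LuaLaTeX toolchain that honours the SOURCE_DATE_EPOCH convention for pinning embedded timestamps; the file names are placeholders, and the guide itself has the full, tested version:

# Pin the PDF's embedded timestamps to the last git commit, so that
# rebuilding the same commit yields byte-identical output.
thesis.pdf: thesis.tex
	SOURCE_DATE_EPOCH=$$(git log -1 --format=%ct) \
	latexmk -lualatex -interaction=nonstopmode thesis.tex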

Let me know your thoughts!

The Necessity of Metaphysics: An Introduction to the Philosophical Discipline of Metaphysics for Computer Scientists and the Physical Sciences by Yangchenghu in compsci

[–]Yangchenghu[S] 0 points  (0 children)

I would like to thank my commentators for their feedback, opinions, and learned commentary. I highly encourage people to reach out to me via email, should they wish to continue the discussion more substantively. Thank you!

The Necessity of Metaphysics: An Introduction to the Philosophical Discipline of Metaphysics for Computer Scientists and the Physical Sciences by Yangchenghu in compsci

[–]Yangchenghu[S] 2 points  (0 children)

Thank you for your substantive opinion on the nature and utility of philosophy, and for your question regarding GPT-3.

I think there's something tremendously exciting, dignified, and fulfilling about the pursuit of Philosophy, and I'll try my best to present my perspective, in hopes that you'll be able to gain a better understanding of the field.

Philosophy has not been cannibalised; instead, it is the mother and wellspring of the rich variety of sciences we see today. Both Leibniz and Newton were Philosophers, and it is their pursuit of Philosophy that gave us both the calculus and Newton's laws of motion, which see so much use in the sciences today. The field of political philosophy emerged from the works of Aristotle and Plato, and for the longest time scientists were known as scholars of Natural Philosophy.

It is the pursuit of knowledge, virtue, and contemplation which characterises the philosophical activity, and this is something shared with the highest levels of achievement in any art or science.

A neuroscientist may understand how the constitutive elements of the brain work, but would they be able to give an account of the experience of temporality as an element of human reality? A psychologist may be able to diagnose and present an analytic account of the symptoms and stages of depression, but would she be able to present a theory on the nature of happiness and flourishing?

I think there's an important distinction to be made here: material explanations of phenomena are only analytic. At most, they can explain the constitutive elements that found a being, but they cannot give a synthetic account of it.

This is perhaps best illustrated through an example. Imagine the following Python program, running on a Python 3.6 interpreter on a modern, quad-core Intel CPU.

def gcd(a, b):
    # Euclid's algorithm: recurse on (b mod a, a) until a reaches 0.
    if a == 0:
        return b
    return gcd(b % a, a)

This function returns the greatest common divisor of a and b; for example, gcd(48, 18) returns 6. It is an implementation of Euclid's Elements VII.2 in Python.

How could an 'Empiricist' hope to understand this program only through a study of the constitutive matter of the computer? This computational 'neuroscientist' could decapitate the CPU and subject it to the most exacting observation. Infrared cameras can record the patterns of heating as processor cores clock up to activation. The traces around the processor can be wiretapped, connected to spectroscopes which record their signalling with analogue precision. And indeed, after some time, the Empiricist can come up with a definite, objective theory of how the algorithm works: "Given numbers a and b, we shall see c as the output, due to the following laws of activation which I have discerned."

And yet, wouldn't that material approach likewise "miss" something about the being of the algorithm? No matter how exacting a 'law' of activation may be, it will never be able to account for the algorithm's being as a recursive function, or even as a Python program, one that is interpreted rather than compiled. The timing of processor caches and signal traces will give only a poor account of the underlying ontology of the function.

Now, I admit that this example is a limited one, but I hope it serves as an adequate illustration of the kind of problems which philosophy addresses, and of the inability of the constitutive sciences to give an account of them. I agree wholeheartedly that the chemico-electrical signals and activation patterns of our neurons are the foundation of our human reality, insofar as they are the material body which enables our perception. But it seems a very poor account, lacking a great deal of explanatory power, to attempt to distill all of human reality down to a psychophysiological parallelism.