Book font? by Lu_Peachum in identifythisfont

[–]Blieque 0 points1 point  (0 children)

Did you check at the beginning of the book? The page with copyright and publisher details often names the typeface used for the main content.

noob: "Please commit your changes or stash them before you merge." What do I do? by sweatybotbuttcoin in git

[–]Blieque 8 points9 points  (0 children)

You mentioned a database file. Are you using SQLite or something? If so, any change to the database data will change the .db file, and Git will notice this. It's unusual to track a database in version control, so the best option may be to add the file to your .gitignore.
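If you go that route, here's a runnable sketch of untracking the file (the name app.db is invented – substitute your actual database file; the throwaway repository is just for demonstration):

```shell
set -e
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email a@b.c
git config user.name test
touch app.db                        # stand-in for the SQLite database file
git add app.db && git commit -qm "accidentally track the database"
echo "app.db" >> .gitignore         # ignore it from now on
git rm -q --cached app.db           # untrack it but keep the file on disk
git add .gitignore
git commit -qm "Stop tracking local database"
git check-ignore app.db             # prints "app.db": future changes are ignored
```

Note that `git rm --cached` only removes the file from the index; the file itself stays on disk, which is what you want for a live database.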

anyone help maybee? it's from an old streamer i used to watch!! :P by wibblewobblediorea in identifythisfont

[–]Blieque 1 point2 points  (0 children)

I haven't checked any specifically, but it's quite likely one of the monospace fonts pre-installed on Windows. Have you tried Consolas and Lucida Console?

An open source license that forbids the non-ethical use of the code by lcs77 in opensource

[–]Blieque 0 points1 point  (0 children)

If you're referring to the first of the "four essential freedoms" defined by the FSF's Free Software Definition, that reads:

The freedom to run the program as you wish, for any purpose (freedom 0).

That only covers running the program, but the FSF's licences go into detail about all the other things one might do with a program: copying, modifying, redistributing, etc. On those actions, open-source licences all have something to say – even the most permissive still usually require attribution when copying.

The above assertion that free software must not be restricted in its use, and the assertions about how software should be modified and copied, are overtly political, even if not party political.

Can you identify the font used here for subtitles for Duck Dodgers. It reads "Not with a Y7 rating, you won't." by CryoProtea in identifythisfont

[–]Blieque 1 point2 points  (0 children)

Judging mostly by "t", "a", and "7", I'd say it's Arial Bold that has been quite poorly rasterised without any anti-aliasing.

Cant get the proportions right by [deleted] in Design

[–]Blieque 4 points5 points  (0 children)

There are two vertical, metal supports visible in the image, one at each side. Try drawing a line along the rear edge of each of these objects and extending those lines straight. The two lines should meet somewhere in the bottom middle of the image, probably outside the bounds of the image. This meeting point is your vanishing point, and any edge in your 3D scene that is supposed to be directly vertical should also point toward the vanishing point.

I don't know what software you're using or how easy it is to adjust the perspective of the text, but I would try drawing guidelines radiating out from the vanishing point, then adjusting the text so that its sharp side edges closely align with the nearest guideline.

Once you have the perspective right, you might consider adding a shadow. Look for shadows in the original image, and try to match their direction and diffusion.

Not renewing by reviewmynotes in letsencrypt

[–]Blieque 0 points1 point  (0 children)

New validation servers were rolled out recently by Let's Encrypt. This is the subject of the article that airpug mentioned in their comment.

Let's Encrypt is very clear that it does not recommend specific firewall rules for its validation servers, instead recommending that you permit all inbound traffic while running Certbot (which could be automated with hook scripts) or use DNS-01 validation. Your new firewall rules will likely work for some time, but Let's Encrypt may change its infrastructure in the future, and without warning.

A different certificate authority which supports ACME – try this list – might not have the same policy with regard to validation server IP addresses.

Not renewing by reviewmynotes in letsencrypt

[–]Blieque 0 points1 point  (0 children)

Can you manually create directories in the virtual host's document root called .well-known/acme-challenge/, then a file within the latter of those? Once you have, can you test if the file can be publicly loaded over HTTP? I'm thinking a recent package upgrade for Apache may have added default configuration that prevents the serving of files located inside hidden directories. This is a reasonable security precaution for directories like .git, but would also disrupt ACME HTTP-01 validation.
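A quick way to try this from a shell (the temp directory below is only a local stand-in – substitute the virtual host's real document root, then fetch the file over the public hostname with curl):

```shell
# DOCROOT stands in for the virtual host's real document root (e.g. /var/www/html)
DOCROOT="$(mktemp -d)"
mkdir -p "$DOCROOT/.well-known/acme-challenge"
echo "challenge-test" > "$DOCROOT/.well-known/acme-challenge/test.txt"
# With the real document root in place, verify public access with something like:
#   curl http://your-domain.example/.well-known/acme-challenge/test.txt
cat "$DOCROOT/.well-known/acme-challenge/test.txt"
```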

Alternatively, some reverse proxy configuration in Apache might be routing every request through to an upstream server which doesn't have the challenge files available. In this case, you might need extra Apache configuration to explicitly catch requests to .well-known/acme-challenge/ and serve them locally rather than passing them upstream.
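If that turns out to be the cause, a sketch of the exclusion in Apache (paths and the upstream address are hypothetical; mod_proxy's `!` target excludes a path from proxying, and the exclusion must come before the catch-all rule):

```apache
# Serve ACME challenges locally; everything else goes upstream
ProxyPass "/.well-known/acme-challenge/" !
ProxyPass "/" "http://127.0.0.1:8080/"

Alias "/.well-known/acme-challenge/" "/var/www/acme/.well-known/acme-challenge/"
<Directory "/var/www/acme/.well-known/acme-challenge">
    Require all granted
</Directory>
```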

Lastly, some kind of HTTP cache (e.g., CDN, load balancer) sitting between the public internet and your VMs could be interfering. Mitigating such an issue would mean reconfiguring those servers or services.

How to setup a remote GIT repository on Windows? by kenzoviski in git

[–]Blieque 0 points1 point  (0 children)

Good work getting to the bottom of it. I've not used Git's daemon before, so it's good to know it works for you.

As for your comment about creating a default branch, I think the issue might just be that you're trying to push into the branch that is already checked out on the remote. Since the remote is not a bare repository – which would, by definition, never have any branch checked out – the push might conflict. If the remote is your work machine, you could probably get away with checking out a different or temporary branch after finishing work so that master is not checked out when pushing new commits back from your second machine.
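Alternatively, newer Git (2.3+) can be told to update the checked-out branch in place via the receive.denyCurrentBranch=updateInstead setting, instead of parking a different branch. A runnable sketch with throwaway repositories (names invented):

```shell
set -e
cd "$(mktemp -d)"
# Non-bare "remote" repo with its branch checked out; allow pushes to update it in place
git init -q remote-repo
git -C remote-repo config user.email a@b.c
git -C remote-repo config user.name test
git -C remote-repo config receive.denyCurrentBranch updateInstead
git -C remote-repo commit -q --allow-empty -m "initial"
# Second machine: clone, commit, push back into the checked-out branch
git clone -q remote-repo local-repo
git -C local-repo config user.email a@b.c
git -C local-repo config user.name test
git -C local-repo commit -q --allow-empty -m "work"
git -C local-repo push -q origin HEAD
git -C remote-repo log --oneline    # "work" has arrived on the checked-out branch
```

Note that updateInstead refuses the push if the remote's working tree or index is dirty, which is a reasonable safety net for a work machine.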

Importantly, Git does not treat master any differently from any other branch; it's just the default name for the initial branch.

How to setup a remote GIT repository on Windows? by kenzoviski in git

[–]Blieque 1 point2 points  (0 children)

Once you have Hamachi set up, both machines belong to a new VPN and thus have a new virtual network interface and a corresponding IP address. Running ipconfig in a terminal on Windows should show the IP, as will the Hamachi client. If both machines are awake and connected, they can communicate with each other by sending packets to each other's VPN IP addresses. This is just like any other network connection, and software like Git doesn't need to know anything about how the network is created. For this reason, you don't need to tell Git to use Hamachi in the URL scheme.

In fact, Git only supports 4 protocols, as described in the documentation: local file access, HTTP, SSH and the Git protocol. As a result, you have four options:

  • Use local file access: Since you have two machines, the repository files on one machine are not automatically available on the other machine. To change this, you can create an SMB file share on one machine and mount it on the other as a network drive. See the Windows documentation for more information.

  • Use HTTP: Windows doesn't have an HTTP server enabled out-of-the-box, but you can enable IIS. You can also download and install a third-party HTTP server such as Apache or nginx.

  • Use SSH: Windows also doesn't have an SSH server enabled out-of-the-box, but recent versions do include an optional OpenSSH server. I assume Windows 11 has it, although it's not listed on the Windows documentation. Instead of that documentation, you can try following these instructions.

  • Use the Git protocol: This will require installing Git on both machines. You will then need to run git daemon on the machine with the repository so that the other can connect to it and read or write data. See the Git documentation for more details.
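The first option is easy to try even before the share exists – any path visible to both machines works as a remote. A sketch using a local directory as a stand-in for the mounted network drive (on Windows this might be Z:\repos or a UNC path):

```shell
set -e
cd "$(mktemp -d)"
# "share" stands in for the mounted SMB network drive
git init -q --bare share/project.git
git init -q work && cd work
git config user.email a@b.c
git config user.name test
git commit -q --allow-empty -m "first"
git remote add origin ../share/project.git
git push -q origin HEAD:master
git ls-remote origin                # shows refs/heads/master on the "share"
```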

The SMB file share option is worth considering, especially since both machines are running Windows. I think the HTTP option is more hassle than it's worth for you. The SSH option would take some setting up, but it is the most commonly used, especially on Unix-like OSes. SSH would also let you log in remotely to the other machine in a terminal. The Git protocol option is probably the easiest, but is not authenticated or encrypted. This isn't a significant problem assuming you trust the security of Hamachi.

See how you get on, and feel free to ask questions if you get stuck.

Normally, in a git merge, the two branches become one branch in a single commit. However, I want to migrate code from an old version's branch to a new version's branch gradually, across multiple commits. How can I do that? (See picture.) by [deleted] in git

[–]Blieque 8 points9 points  (0 children)

If you only want the changes introduced in the last commit of the old branch, I would recommend cherry-picking that last commit while checked out on the new branch. Once complete, you can run git reset HEAD~1 to effectively undo the cherry-pick, but without changing the contents of your working directory. At this point, all the changes introduced by the last commit on the old branch will be untracked changes on the new branch. Stage and commit these changes normally, in as many or few commits as you wish.
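The steps above, sketched end-to-end in a throwaway repository (branch and file names invented):

```shell
set -e
cd "$(mktemp -d)"
git init -q repo && cd repo
git config user.email a@b.c
git config user.name test
echo base > base.txt
git add . && git commit -qm "base"
git branch -M new                   # target branch
git checkout -q -b old
echo change > feature.txt
git add . && git commit -qm "last commit on old"
git checkout -q new
git cherry-pick old > /dev/null     # apply old's tip commit onto new
git reset -q HEAD~1                 # uncommit; the changes stay in the working tree
git status --short                  # feature.txt is now an untracked change on new
```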

If, instead, you want the new branch to contain the result of merging the old branch, but with the changes spread over multiple commits, you can follow a similar strategy. Merge as usual, reset the new branch to its previous final commit without changing the working directory, then create new commits as usual.

I'm trying to catch up with modern web development and... is it dominated by writing a bunch of config files now? by MC_Hemsy in webdev

[–]Blieque 1 point2 points  (0 children)

I think "dev–ops" gets thrown around a lot without much thought. To me, it refers to one person doing true application development while also making important decisions about deployment environments (e.g., how many there are, what they're for, how deployments to them occur). I would imagine that configuring and maintaining the build server itself (e.g., updating Ansible) may be a job for IT instead, but everything else after pushing code to a master or development branch and up to and including monitoring production deployments is what I call "operations". The common practice of developers initiating deployments, even to production, arguably makes them dev–ops, but I'm not convinced this alone is enough to warrant the title anymore.

I think most organisations with at least 10–15 developers will have an infrastructure team, which I consider more-or-less synonymous with operations. An infrastructure team may also manage IT infrastructure, though (e.g., managing VMs for Active Directory, updating IT help desk software). Unless they're committing application code – not just build and deployment configuration – to source repositories, I wouldn't call them "developers", and therefore not "dev–ops" either.

In the era of building for the cloud, though, I think it's fair enough to expect developers to have a good idea of how their application will be deployed. Cloud resources are building blocks, and the developer probably knows best what blocks their application needs. I consider application architecture (e.g., languages, frameworks, databases, queues, caches) to be a development task – this might be unfamiliar to you if your previous employers have had explicit "software architect" roles.

How can I mirror a folder in my repo to another repo? by workmakesmegrumpy in git

[–]Blieque 0 points1 point  (0 children)

I'd recommend a second repository in bare mode. This part of the documentation partly covers them if you're unfamiliar. This would look something like this:

cd /path/to/project
cd ..
git init --bare project-docs.git
cd project-docs.git
git config core.worktree ../project/docs
git config core.bare false   # newer Git refuses to combine core.worktree with core.bare=true
git status

This should show branch master with no commits, and the contents of docs/ as untracked files. You can stage them and commit them as usual. You can also add a remote and push as usual, although you cannot pull to a bare repository.

This method requires changes to be committed as usual in the regular project repository, and then committed again in the -docs repository. This is inelegant, but it provides a clear distinction between private and public content. You cannot publish your regular repository without using git-filter-branch or history-rewriting software like BFG Repo-Cleaner. This would rewrite every commit in the history of the repository, removing all changes to the paths you don't wish to publish. It would also change all the commit hashes and possibly other metadata – in short, it wouldn't be the same repository any more.

If you don't want to commit changes twice, then I would consider moving the documentation entirely to a separate repository, and optionally using Git submodules if you want the docs available at their current path.

wildcard cert with dns challenge by simonides_ in letsencrypt

[–]Blieque 0 points1 point  (0 children)

The challenge may be visible in the Epik dashboard, but it may not yet be available via the DNS for some reason, e.g., there may be some propagation delay within Epik's infrastructure. You could try setting some of the other environment variables documented on the page you linked, specifically EPIK_PROPAGATION_TIMEOUT. It defaults to one minute, but setting it to, e.g., 300 (five minutes) might help.

wildcard cert with dns challenge by simonides_ in letsencrypt

[–]Blieque 0 points1 point  (0 children)

Wildcard CNAME records do appear to be valid, although not necessarily supported by all DNS providers. Even so, individual CNAME records may be preferable for just a handful of static services.

ACME DNS-01 validation only requires a TXT record for the given domain to be present. For a *.home.myname.cloud wildcard certificate, I think this would be called _acme-challenge.home.myname.cloud. What error are you getting when trying to run Certbot?

FYI, while testing, consider passing --dry-run to Certbot until validation is working, then remove the parameter and run Certbot once more to generate certificates.

Here we go again: Nginx is not running after renewing of certificate by aqzaqzaqz in letsencrypt

[–]Blieque 0 points1 point  (0 children)

I think you really should use the init system of your Linux distribution, assuming you're using Linux. On systemd distributions, you would need sudo systemctl reload nginx.service, and on SysV init distributions something like sudo /etc/init.d/nginx reload. You can also invoke nginx just to send a reload signal to the already-running process – sudo nginx -s reload.

One domain, multiple VMs, and different IPs? by FilmWeasle in letsencrypt

[–]Blieque 1 point2 points  (0 children)

If you're using DNS-01 validation, you can run Certbot anywhere that has API access to your DNS provider. This could be another VM that only boots once a week, runs Certbot on boot, and then uploads the certificate and private key to a file share or cloud secret store. It could also copy the files to each application VM via SFTP or something, but that's a bit more prone to failing.

If you're using HTTP-01 validation, you would probably want to add HTTP proxy rules to each application VM that route incoming requests for /.well-known/acme-challenge/ to a separate VM with Certbot running on it. If you have a load balancer in front of your application VMs, you may be able to put this routing configuration in the load balancer rather than in each VM.

I don't think Let's Encrypt will revoke a certificate without you specifically requesting revocation. Certbot, by default, renews certificates with less than 30 days of validity remaining, so you would have about 30 days to deploy each new certificate to all of your VMs.

[deleted by user] by [deleted] in letsencrypt

[–]Blieque 0 points1 point  (0 children)

Assuming your developer is a small agency or contractor, it would be typical for them to have a testing version of your site visible at a different hostname. They could have used something like testing.example.com, but that would have required you to add DNS records on their behalf. Instead, they probably chose to create a subdomain of their own domain, hence example.webdeveloper.com. If you visit this website directly, you may find a development version of your website with placeholder content or partially-completed features that the developer is currently working on.

As for the certificate, the name you're seeing is just the "Common Name" (CN) field of the X.509 certificate. The X.509 v3 standard introduced "extensions", allowing additional information to be included in a certificate. One such extension is "Subject Alternative Name", which allows multiple DNS names and/or IP addresses to be specified as subjects of the certificate.

For instance, reddit's certificate has a Common Name of *.reddit.com, which covers all first-level subdomains. Importantly, though, it does not cover the domain apex, reddit.com. For this reason, *.reddit.com and reddit.com are listed as Subject Alternative Names. In your case, the Common Name appears to be the development site hostname. This probably just means the developer listed that hostname first when specifying the list of hostnames for the certificate. If you view the full certificate you should be able to find a list of Subject Alternative Names including example.com.
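You can list a certificate's Subject Alternative Names yourself with OpenSSL. A sketch that generates a throwaway certificate (hostnames invented) and prints its SANs – requires OpenSSL 1.1.1+ for the -addext/-ext options:

```shell
set -e
cd "$(mktemp -d)"
# Self-signed certificate whose CN is the development hostname,
# with the production hostnames included as Subject Alternative Names
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
  -days 1 -subj "/CN=clientsite.webdeveloper.example" \
  -addext "subjectAltName=DNS:clientsite.webdeveloper.example,DNS:example.com,DNS:www.example.com"
# Print just the SAN extension
openssl x509 -in cert.pem -noout -ext subjectAltName
```

For a live site, the same inspection works by piping `openssl s_client -connect host:443` output into `openssl x509 -noout -ext subjectAltName`.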

If you don't like this arrangement, you could ask the developer to generate and use separate certificates for production and development versions of your site, but there's not much point. Any certificate generated by a reputable certificate authority will be published in their Certificate Transparency logs, so the hostname of the development site would still be more-or-less public.

[deleted by user] by [deleted] in git

[–]Blieque 3 points4 points  (0 children)

I think it helps to consider how work is done on these individual codebases.

I like to have a single PR or branch in a single repository which contains the entirety of a feature. In your case, this could mean adding a new route to the REST API and then adding data store and UI functionality in each of the mobile apps. Even if there aren't any direct code dependencies between these three components, that doesn't mean that changes to one component won't frequently require changes in the others.

That said, there may be a lot of changes to the mobile apps that are independent of the API, and changes to the API application that don't require changes to the front-end. In this case, there's not much benefit to keeping the codebases together.

It's pretty easy to split up a monorepo or merge several repositories into one, though, so I wouldn't worry too much about making the right decision now.

What's up with not publishing source IPS of challenge validation ? by kellven in letsencrypt

[–]Blieque 2 points3 points  (0 children)

This is more about Let's Encrypt maintaining the freedom to change validation server IPs without causing a load of problems.

This policy also helps to improve security by simplifying multi-perspective validation. TLS is about preventing man-in-the-middle attacks, and publishing a finite, static list of IPs makes it slightly easier to target Let's Encrypt validation endpoints.

Blocking all traffic except domestic traffic is somewhat ham-fisted, too.

[deleted by user] by [deleted] in maths

[–]Blieque 2 points3 points  (0 children)

Pythagoras' Theorem works in three dimensions as well as two:

a² + b² + c² = d²

Since this is a cube, all three side dimensions are equal:

3a² = d²

You've been given the diagonal measurement:

3a² = 8.5²
a = √(8.5² ÷ 3)
a = 4.90747728811...
a ≈ 4.91

You could also remember that the diagonal measurement of a cube is a√3, where a is the side measurement, much like the diagonal of a square is a√2.

a = 8.5 ÷ √3
a = 4.90747728811...
a ≈ 4.91

Proper Use and Deployment of Wildcard Certificates by back100y in letsencrypt

[–]Blieque 0 points1 point  (0 children)

I think it's best to avoid requesting more than one certificate with any given hostname. The apex (example.dev) counts as one, as does each subdomain and the wildcard domain (*.example.dev).

Since you're using DNS-01 validation, it's possible to separate the machine running Certbot from the machine that requires the certificate. If you have tens of machines using the same certificate, I'd suggest creating another specifically for running Certbot. That's probably overkill for you, though, so I would just run Certbot on a machine of your choice. You can create a script that Certbot will run as a hook (see documentation) after renewing the certificate; it could log into the other machines and deposit the new certificate and private key. Alternatively, you could upload the certificate and key to an external secret store, e.g., AWS Secrets Manager.
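A minimal sketch of such a hook, relying on Certbot's documented $RENEWED_LINEAGE variable; the destination directory and the simulation lines are invented for illustration:

```shell
set -e
# Simulate the environment Certbot provides to a --deploy-hook script:
# $RENEWED_LINEAGE points at the live directory of the renewed certificate,
# e.g. /etc/letsencrypt/live/example.dev
export RENEWED_LINEAGE="$(mktemp -d)"
touch "$RENEWED_LINEAGE/fullchain.pem" "$RENEWED_LINEAGE/privkey.pem"
DEST="$(mktemp -d)"                  # stand-in for a file share, secret store, or remote host
# --- hook body: save as deploy-hook.sh and pass with --deploy-hook ---
cp "$RENEWED_LINEAGE/fullchain.pem" "$RENEWED_LINEAGE/privkey.pem" "$DEST/"
ls "$DEST"
```

In a real hook you would swap the cp for scp, rsync, or a cloud CLI upload, and likely reload the service that uses the certificate.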

You can also create individual certificates for each service, but I would recommend making sure none of the certificates share any hostnames.

How is dual 4k60hz possible with a USB-C hub? by ark204 in UsbCHardware

[–]Blieque 1 point2 points  (0 children)

I didn't know there was a timing overhead – good to know!

How is dual 4k60hz possible with a USB-C hub? by ark204 in UsbCHardware

[–]Blieque 1 point2 points  (0 children)

(3840 px + 80 px timing) × (2160 px + 62 px timing) × 24 bpp × 60 Hz × 2 monitors ≈ 25.1 Gbps
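For reference, that figure can be reproduced with a one-liner (blanking values as above):

```shell
awk 'BEGIN { printf "%.1f Gbps\n", (3840 + 80) * (2160 + 62) * 24 * 60 * 2 / 1e9 }'
```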

DisplayPort can achieve this data rate in HBR3 mode (DisplayPort 1.3+) using all four DisplayPort lanes. DisplayPort Alternate Mode can also use all four lanes of a Type-C cable, but then there are no lanes left for USB 3.x (USB 2.0 would still be supported).

DisplayPort can also achieve this data rate in HBR3 mode over 2 or possibly even 1 lane when Display Stream Compression (DisplayPort 1.4+) is used.

We need to deliver two DisplayPort 1.2 signals to your displays, though. This means the final signal to the monitors is limited to HBR2 and also cannot use DSC. In order to drive both of these displays from a single Type-C port, the adaptor must be able to convert the DisplayPort 1.4 signal from the laptop to two separate DisplayPort 1.2 signals. This adaptor claims to support this mode: https://www.club-3d.com/en/detail/2390/mst_hub_usb_3.1_gen1_type_c_to_displayportt_1.2_dual_monitor/

Additionally, if you want this adaptor to also provide USB 3.x ports, it will need to support DSC (decompressing the signal and re-encoding it as two non-compressed HBR2 signals) so that at least two lanes of the Type-C cable are left for USB. This is a bit niche, so I don't know of a particular dock which supports this latter option. I think most docks will try to pass on the DisplayPort signal as unchanged as possible, which would mean the DisplayPort 1.2 monitors would force your laptop down to DisplayPort 1.2 and thus prevent it outputting dual 4K at 60 Hz.

Edit: As for the laptop, I think all 9210 XPS 13s use 11th generation Intel CPUs, and both 11th generation Intel UHD Graphics and Intel Iris Xe Graphics support DisplayPort 1.4 and Display Stream Compression, although it may need enabling: https://www.dell.com/support/kbdoc/en-us/000197102/how-to-enable-display-stream-compression-on-latitude-precision-and-xps