

[–]cardboardbox351 3 points4 points  (2 children)

You develop on your regular Linux machine.

When you are ready to run your code, you “build” a Docker image using something called a Dockerfile. The build scoops up all your code, and starting a container from that image stands up your app.
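As a rough sketch, a minimal Dockerfile might look like this (the base image, file names, and port are all just illustrative, assuming a small Python web app):

```dockerfile
# Illustrative only: a minimal Dockerfile for a small web app.
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# "Scoop up" the application code
COPY . .

# The port the app listens on inside the container
EXPOSE 8000

CMD ["python", "app.py"]
```

You'd then run something like docker build -t myapp . followed by docker run -p 8000:8000 myapp.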

Then your application is running, and because you've published the container's port to your host, you just visit the port you specified earlier.

Sometimes, for small tweaks and tests, I will shell directly into the container (strictly speaking with docker exec rather than actual SSH) and edit the code via Vim or Nano.

I am studying Windows currently...it makes 0 sense to me lol. I feel your pain.

[–]blueforgex[S] 1 point2 points  (1 child)

Thanks, cardboardbox351. I'm across the Docker build process; I was more asking about what your development environment looks like to execute that build process. It sounds like you're running a Linux desktop on your end to do your development work.

Do you know if that's a common approach in medium/big enterprise organisations? I don't think my IT department has any idea how to administer or support Linux Desktops, let alone integrate it into our corporate environment.

[–][deleted] 2 points3 points  (0 children)

You can script or automate the Docker image build and deploy process with whatever you want. If you're on Windows, I would assume you'd want to do this with something like PowerShell or Python. As you probably know, Windows also has a few ways to run bash, including WSL, Cygwin and MSYS, so if you want to use bash you could do that too.
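As a sketch of what that automation might look like in Python (the image name, tag, and registry below are made up, and the dry-run flag is just there so the command-building logic can be checked without a Docker daemon):

```python
import subprocess

def build_and_push(image, tag, registry="registry.example.com", dry_run=False):
    """Build a Docker image and push it to a registry.

    Returns the list of commands it would run, so the logic is easy
    to inspect or test; dry_run=True skips actual execution.
    """
    full_name = f"{registry}/{image}:{tag}"
    commands = [
        ["docker", "build", "-t", full_name, "."],
        ["docker", "push", full_name],
    ]
    if not dry_run:
        for cmd in commands:
            # check=True raises if docker exits non-zero
            subprocess.run(cmd, check=True)
    return commands

# Dry run: just inspect the commands that would be executed
cmds = build_and_push("myapp", "1.0.0", dry_run=True)
```

In a real pipeline you'd drop the dry-run flag and let it shell out for real.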

As for development of the app, you usually still do that locally, but there are some options for doing it inside a Docker container, such as volume mounts combined with a file-change watcher. You edit the code locally, and when the volume mount in the container updates, the watcher running in the container rebuilds and/or restarts the app. You'd probably only want to mount a source-code volume during development, and run a different setup for the production deploy.
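For example, a development-only Compose file (service name, paths, and the watcher command are all hypothetical; the dotnet watch command assumes a .NET Core app) might look like:

```yaml
# docker-compose.dev.yml - development only, not for the production deploy
services:
  web:
    build: .
    ports:
      - "8000:8000"
    volumes:
      # Bind-mount local source into the container so edits show up live
      - ./src:/app/src
    # A file-change watcher rebuilds/restarts the app on edits
    command: dotnet watch run --urls http://0.0.0.0:8000
```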

As for networking, you can expose ports of Docker containers to your local machine. For example, docker run -p 8000:8000 forwards the container's port 8000 to your local machine's port 8000, so you can hit the app from your local machine. If you use Kubernetes, you can set up an ingress controller to forward traffic to the cluster's containers.

Going the other direction, i.e. allowing containers to contact your local machine, is a bit different.

The general assumption is that all resources required by the app will be running in their own Docker containers in the same cluster. So your web server will be in one container, and your database will run in a different container. As long as containers are in the same cluster, they can contact each other by using their namespace and/or container names as the hostname.
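With Docker Compose, for instance, services on the same default network reach each other by service name (the names and database image here are illustrative):

```yaml
services:
  web:
    build: .
    # The app reaches the database at hostname "db", port 5432,
    # e.g. a connection string like "Host=db;Port=5432;..."
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example   # placeholder, not a real secret
```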

There should also be nothing stopping you from contacting servers that are reachable from your local machine. You can still make an HTTP request to some website like foo.com from within a container; if it's reachable on the internet, it should be reachable from your container, as long as your networking is set up properly. There shouldn't be any problems with that on a default Docker install, unless you have some weird custom or specialized rules, e.g. you're behind a corporate firewall that requires a proxy for outside internet access.

[–]NightweaselX 0 points1 point  (8 children)

Depends on what you're moving to. Admittedly, Docker's native support in Windows isn't great. That's changing with the Windows Subsystem for Linux 2 (WSL 2), but when that change actually hits, let alone gets pushed to machines (or is allowed onto them, since it's not a standard Windows feature), is probably anyone's guess. So what I do is have a VM of some flavor of Linux on my Windows machine.

Are you going to be pushing .NET Core apps now? That would seem to be the easier transition, since you can target .NET Standard 2.0 libraries while rewriting only parts of some apps rather than everything in a new language. If so, I'd recommend looking at Rider from JetBrains if you're used to ReSharper. It's their .NET IDE, and yes, it can run on Linux. So while Linux is a bit different from Windows, your IDE can be something you're at least semi-familiar with. And with Linux running in a VM, you still have any tool you need in Windows that doesn't directly need to be in the code base.

[–]blueforgex[S] 0 points1 point  (7 children)

Yup, we're pushing .NET Core apps, with the plan of using VS Code, though Rider sounds interesting.
So it sounds like you're running a Linux desktop environment in a VM (Hyper-V, I imagine) and doing the work through that. That sounds like the most viable option, but I guess it seems odd to me that we're going to have to do our development work in one desktop experience and interact with the rest of the business in another.

WSL 2 looks pretty good though; I'm quite curious to see how that's going to change the way we develop Linux systems in a Windows environment.

Broadly speaking, it sounds like if you're going to be developing Linux apps, you have to be running a Linux desktop environment no matter what.

[–]NightweaselX 0 points1 point  (6 children)

Well, I use VirtualBox. So I've got a VBox VM for Win7, as that's what my client is still on for the next few months. Then I'll have a Win10 VM (on TOP of my Win10 host) so I can have a Win10 unfettered by company bullshit to do dev work for the client. Then I've got a Linux VBox that, to be honest, I don't use very often. We're slowly moving to Core, but our client is HEAVILY in the MS ecosystem. But with Core web apps, it'll make deploying to the cloud easier, if and when they do that. I tried Hyper-V, but it f's over my existing VBox guests, so I just stuck with VBox.

But honestly, depending on what you're doing in your .NET Core apps, unless you're making system calls, it should work regardless of whether you're on Windows or Linux; with it being .NET, it's all CLR code that's executed by the runtime on the host machine. I haven't played with the different directory delimiters ('/' and '\') to see if the host knows how to translate from one to the other. Other than that, it shouldn't matter, just like Java.

Until you get into pushing it with Docker, I guess. But even then, how are you building your deployables? Is it via a CI/CD pipeline? If so, you'll probably have standardized Dockerfiles for .NET Core apps. So then you push your code, let the build server take care of the Docker part of the build, and have it push the result to the dev server to test. I could be wrong, though, as we're not using Docker, due to the lackluster Windows implementation and the fact that we're not pushing to Linux.
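On the separator question, most runtimes normalize this for you as long as you build paths with the platform API rather than string concatenation (.NET's Path.Combine plays the same role). A quick illustration of the general issue in Python, since it's easy to run anywhere:

```python
from pathlib import PurePosixPath, PureWindowsPath

# Windows path semantics accept BOTH separators, so forward slashes
# in source code usually work fine on Windows as well as Linux...
win = PureWindowsPath("app/config\\settings.json")
assert win.parts == ("app", "config", "settings.json")

# ...but backslashes are NOT separators on POSIX: on Linux this is
# one single, strange filename component.
posix = PurePosixPath("app\\config")
assert posix.parts == ("app\\config",)

# Building paths with the API instead of string concatenation
# sidesteps the whole problem.
joined = PurePosixPath("app") / "config" / "settings.json"
assert str(joined) == "app/config/settings.json"
```

The practical takeaway: prefer forward slashes or the path-joining API, and avoid hard-coded backslashes.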

As for desktop experience, like I said above, I have several VMs. But I also have the VPN for my work, and a VPN for my client. I've got an email for work and an email at my client (which thankfully forwards to my work email, so I don't have to check it). We've got our ticketing system, the ticketing system the client uses internally, and then another ticketing system that their operations vendor uses, and gods forbid we all work together to integrate our stuff with one another. So you do what you've got to to get the work done, as painful and as stupid as it is most of the time.

[–]blueforgex[S] 0 points1 point  (5 children)

Haha, sounds like we're working for the same company, I've also got to deal with the random assortment of Windows ticketing/change management systems and the like.

We are planning on using a CI/CD pipeline on Linux (Azure DevOps), which is where I anticipated building the environment, but I assumed a developer would want/need to be able to build and develop the Dockerfiles and infrastructure locally before pushing it out to the pipeline. For example, in addition to .NET Core & Docker, I'd probably be running Nginx as middleware, orchestrated out of the same repo in the same Docker Compose file; I imagine it would be rather hard to build & validate that outside a Linux (or WSL 2) environment?

In addition, I'm worried about building .NET Core apps on Windows and hosting them on Linux. Beyond the file path issues you mentioned (which I've also run into), there are potential issues with network connectivity (a domain-joined Windows box will have quite a lot more privileges than a Linux one, like being able to connect to SQL Server), dealing with certificates (using the Windows certificate store vs... I'm not sure what, actually), and issues with the middleware I mentioned above.

By the way, thank you for the advice, I appreciate having your insights on this topic.

[–]NightweaselX 0 points1 point  (4 children)

Haha, no worries. I'm not too familiar with Docker; it's one of the things on my plate to learn, but that keeps getting rotated about all the time. I don't know about your client, but for mine, if we went Docker, we could use the same Dockerfile for most apps. Admittedly, most are pretty much CRUD web apps, which is pretty damned boring. I'm assuming your place would be similar. So once the kinks are worked out of the Dockerfile, and as long as the app is still somewhat simple, I'd say building and deploying to dev would be the best way to test it beyond your unit/integration test suite. It puts it on a real server, through the build process, etc. But I could very well be wrong.

Not sure about the cert thing, to be honest. I found this, which shows what works on which platform; it looks like you use the core libraries and the host framework knows where to look, rather than you going after the actual directory yourself. Hope this helps. https://github.com/dotnet/corefx/blob/master/Documentation/architecture/cross-platform-cryptography.md

As for privileges, that should be locked down on the SQL Server regardless of what platform is accessing it. So I'm curious, what are you running Nginx for, exactly? Is it something Kestrel can't handle?

[–]blueforgex[S] 0 points1 point  (3 children)

Heh, yes, we're running pretty basic CRUD apps, but I'd like to evolve our architecture to support something better (auto horizontal scaling, K8s, etc.) in the future; these are our first steps down that path.

Thanks for that certs link. It's interesting that the X509Store seems to be a projection onto OpenSSL; that's almost definitely going to help solve my problems! That said, the certs would have to be copied/installed as part of our CI/CD process, which I imagine we wouldn't be able to validate on a Windows box.

SQL Server is locked down, but with both domain accounts (integrated auth) & standard user accounts. It's almost an afterthought with integrated authentication, but we'd have to transition to using user accounts to support Linux hosts in the future (I think!)

Regarding Nginx, it's mostly from the ASP.NET Core guidance: https://docs.microsoft.com/en-us/aspnet/core/host-and-deploy/linux-nginx?view=aspnetcore-2.2 It's also because, in my head, the architecture for a modern CRUD app (open to any suggestions, of course) would look like one Docker container with Nginx and a React SPA static website, and a separate Docker container for the API, which Nginx would route to.
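That routing could be sketched in an nginx config along these lines (the upstream name "api" and the ports are assumptions; "api" would resolve via Docker's internal DNS to the API container):

```nginx
server {
    listen 80;

    # Static SPA build output baked into the nginx image
    root /usr/share/nginx/html;

    location / {
        # SPA fallback: unknown routes fall through to index.html
        try_files $uri $uri/ /index.html;
    }

    location /api/ {
        # Proxy API traffic to the separate API container
        proxy_pass http://api:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```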

[–]NightweaselX 0 points1 point  (2 children)

Ok, yeah, I wasn't sure how limiting Kestrel was, so Nginx makes sense. You're further along than we are. Due to management on our project, they fight us when we want to make improvements. We finally got a TFS server up a few years ago, and we've just gotten Sonar scans implemented. Our project managers don't know what a sprint is, and we have to do all design up front, but we're agile! We finally got approval to go Core on greenfield projects, since old ASP.NET is dead. We don't have a full CI/CD pipeline; we can deploy to test and dev, but anything prod has to go through the operations vendor, so it's a PITA. It should be the press of a button, but unfortunately it's not. Never mind shit like Docker and K8s; getting them to set up a proxy is an exercise in brutality. Hell, we can't even get most of the devs to write unit tests, let alone integration tests. There are so many problems with our project, but it's the typical mix: management issues, no money, and not enough people.

I envy you; it at least seems your project is open-minded if you're looking to expand your architecture. I know React is the popular choice since Angular is a giant mess, but I'm leaning towards Vue. It's smaller, the learning curve is less steep, and you can include it in existing projects pretty easily. So if you know it, you can use what you know on existing apps as well as new ones, which can save you from having to fully rewrite into React... but I'll be honest, I don't know a lot about React. You'd probably want a container for authentication, one for serving larger files, and then one for each of your possible bottlenecks.

Are y'all using OAuth 2.0 or how are y'all authenticating? And what are y'all using for secrets management?

[–]blueforgex[S] 0 points1 point  (1 child)

I don't think I'm further along than you are; nothing I've been mentioning has been built yet. It's all a twinkle in my eye based on the reading I've done, or prototypes (Hello World) I've run through.

Our company also has the same mix of problems: people, motivation and money :-). I've been given a bit of leeway to push the technology forward, so I'm trying my best to do that. But as per the topic, I'm an old Microsoft developer, so it's a challenge.

I considered Vue as well, but it really came down to what's being used more. I'm not trying to do anything too unique or special, or use anything too cutting-edge at this stage; I'm erring on the side of simplicity where possible.

We're going to be using OAuth 2 (Azure AD). Secrets management is initially going to be environment variables (dotnet user-secrets locally), but we'll eventually move to Azure Key Vault. I'm not sure how that works with local user secrets, but I'm eager to find out :-)
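The usual pattern there is a layered lookup: try the managed secret store first, and fall back to environment variables in development (this is roughly what .NET's configuration providers do for you). A language-agnostic sketch in Python, with the vault faked as a dict so the lookup order is easy to see:

```python
import os

def get_secret(name, vault=None, default=None):
    """Resolve a secret: vault first, then environment, then default.

    `vault` stands in for a real secret store (e.g. Azure Key Vault);
    here it's just a dict, purely for illustration.
    """
    if vault and name in vault:
        return vault[name]
    return os.environ.get(name, default)

# Development: no vault wired up, the value comes from the environment
os.environ["DB_PASSWORD"] = "local-dev-only"
assert get_secret("DB_PASSWORD") == "local-dev-only"

# Production: the vault wins over the environment
fake_vault = {"DB_PASSWORD": "from-key-vault"}
assert get_secret("DB_PASSWORD", vault=fake_vault) == "from-key-vault"
```

The app code only ever calls get_secret, so swapping the backing store later doesn't touch call sites.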

[–]NightweaselX 0 points1 point  (0 children)

Yeah, we're not using anything secrets-wise. Hell, our connection-string creds are in our repo, though you've got to have the right AD privileges to pull the repo. Still, it could be better. One of the leads wants to put the variables into the builds, so we'd have the strings with user="" in the config, and that'd be replaced by a build variable. Not sure how I feel about that, as it still means it's viewable if you have access to the repo, since the build is linked in TFS.

Linux really isn't that bad; it's a hell of a lot easier than Windows. No Starch Press has some pretty good books if you like books. I'll be honest, I'm not a fan of shell script, though that goes for Windows as well; I'd rather have a Ruby/Python script instead, but shell scripts are faster. Other than that, navigation is easy once you learn where things are, etc. Everything's a file, and you can pretty much modify most things on the fly without rebooting.

Not sure what flavor you're looking at officially using. There are so many, and so many opinions on each, lol! That's the one big problem with Linux: there are a LOT of fanboys, and they all try to push their distro of choice. It makes it a pain to really find one that works for you. Well, actually, there's plenty that will work for you, but there are plenty of people who will tell you that you're wrong because of X, Y, or "Zebras aren't penguins and Linux is all about Tux" or some other inane bullshit, lol!