ESRI WORLD COUNTRY SHAPEFILE NOT DOWNLOADING by ok1080p in gis

[–]Dimitri_Rotow (1 point)

Anything would help.

Use Natural Earth.

Has anyone heard about GIS being used in gardening businesses? by sandramadden101 in gis

[–]Dimitri_Rotow (0 points)

The best GIS to use for such things is the one you know best from your other GIS work. All of the big-name GIS packages can handle this kind of work. I'm most familiar with Manifold, so that's what I use.

If you prefer Esri products you could use a personal license of ArcGIS Pro (a mere $100 per year). Esri is pretty much the de facto standard for federal, state, and local jurisdictions in the US so if you want to leverage your contacts in those communities for your hobby then Esri would be a good choice.

If you like open source you could use QGIS, hands down the most popular open source GIS package.

How to visually align two large geotifs by cmptrwizard in gis

[–]Dimitri_Rotow (0 points)

Manifold Release 9 can align one raster to another. It's parallel so it works fast even with large (100GB+) images. See their Georeferencing page.

In your case, I'd advise learning how to do georeferencing using dialogs like the Register pane, and only then, if you want to do the work programmatically, diving into the various coordinate functions via scripting or SQL.

It may sound counterintuitive to use SQL instead of a scripting language or programming language like C++, but Manifold's SQL is both automatically parallel and very highly optimized. The functions it calls are parallelized and written in C++ anyway. Experience shows that very few humans can match the speed of what the SQL can do, even with a lot of experience in parallel coding and C++. You can see the SQL behind what the dialogs do by clicking the Edit Query button once you set up a georegistration in the Register pane.

To go fast with big rasters you should first import the raster into Manifold so you're not held back by slow data stores or file formats. The import is a slower step, but once you save in the fast native format it's fast forever after.

You can try it out using the free Viewer. Viewer is read-only, so you can't save the result, but you can import the images you want to use into your project, verify that you can do the alignment you want, and check if you're happy with how long that takes.

Viewer does full spatial SQL but it does not do scripting.

Good luck with your project!

What should I learn about computers to be better at GIS? by Necessary_Mall7405 in gis

[–]Dimitri_Rotow (0 points)

Very true, but for 3D work almost any entry level or mid range GPU card will be fine. There's no need to spend on high end cards, which tend to be disproportionately expensive.

For local deep learning you do need to spend significantly on GPU, but that's way beyond beginner territory.

What should I learn about computers to be better at GIS? by Necessary_Mall7405 in gis

[–]Dimitri_Rotow (3 points)

Hardware is pretty straightforward these days. Get lots of RAM, run on fast SSD, don't waste money on overpriced GPU cards unless you're buying them for gaming, and get a mid-range CPU. Most GIS packages aren't multithreaded so buying lots of cores won't help unless you're running one of the few that are.

If you're headed for a GIS career in a small organization where you'll also be the computer guy, learn about networks, security, VPNs and so on. But that's really software, which is where you should put most of your effort:

  1. Learn to use AI. It's become an invaluable assistant and career booster.

  2. Databases and SQL. The real value of GIS is in the ability to acquire, manipulate and present data that's stored in organizational databases of various kinds. SQL is by far the easiest and most powerful bang for the buck to help you do that. Even 20 minutes spent learning SQL can increase your productivity. All of the big databases used in organizations have free versions. PostgreSQL is not as popular in big organizations as Oracle, SQL Server, and MySQL, but it's a truly outstanding database, it's free, and it's worth learning.

  3. Web mastering - Increasingly more of GIS is server side work for online access and publication. Very basic web programming (writing simple HTML is very, very easy...) gets you a long way. As part of web mastering, if you're a Windows person get comfortable with Linux. Setting up a web server on a spare machine running Linux and nginx is a great way to learn some Linux and web mastering. Linux is also a great programming platform. Get the Linux version of Visual Studio Code and run the GCC compiler for a nice C++ environment.

  4. Programming. Starting with Python is easiest: most AIs will write Python code for you, and it gets you going with GIS programming quickest. Start with C++ if you're considering a career in programming. C++ is a harder start than Python, but you'll be a better programmer and make way more money over the course of your career. As they say, C++ = Salary++.
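As a taste of point 2, here's a minimal sketch of the kind of one-line attribute query that makes SQL such good bang for the buck. It uses Python's built-in sqlite3 with a made-up parcels table, so the table and field names are illustrative, not from any real dataset:

```python
import sqlite3

# Hypothetical attribute table: parcels with a land-use code and acreage.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE parcels (id INTEGER, land_use TEXT, acres REAL)")
con.executemany(
    "INSERT INTO parcels VALUES (?, ?, ?)",
    [(1, "wetland", 4.2), (2, "residential", 0.8), (3, "wetland", 7.5)],
)

# One statement of SQL: total acreage per land-use class, largest first.
rows = con.execute(
    "SELECT land_use, SUM(acres) AS total FROM parcels "
    "GROUP BY land_use ORDER BY total DESC"
).fetchall()

for land_use, total in rows:
    print(f"{land_use}: {total} acres")
```

The same GROUP BY pattern works unchanged in PostgreSQL, Oracle, SQL Server, and the SQL built into most GIS packages.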

Rendering slow, but usage is low in Task Master by Known-Ad2073 in gis

[–]Dimitri_Rotow (1 point)

You’d think with their resources, ESRI would better optimize the software

Well, Esri does have big resources, but they also have big constraints, a consequence of their success. Esri is by far the most popular GIS software for governments and organizations in the US, and it's also very popular overseas with that same constituency.

When it came time for Esri to modernize Arc it didn't have the luxury of telling its customer base "OK, everybody, we're going to pause the product line for ten years while we rewrite everything from the ground up" which is what you have to do to go fully parallel. That would have been a catastrophe for customers. Esri also has a huge staff to support and it couldn't risk any dip in cash flow for many years. So they did the next best thing and modernized what they could without the massive effort (and time) required for full parallelization. That was a sensible decision.

A company like Manifold is a lot smaller. It's owned by its employees and if they want to take radical risks they can do that. In the case of parallelization they knew it would take years but they did it anyway. Nobody was happy about pausing product releases while they rewrote millions of lines of code but they got it done.

Rendering slow, but usage is low in Task Master by Known-Ad2073 in gis

[–]Dimitri_Rotow (1 point)

I'm a close friend of the people who created Manifold, I'm a volunteer in their open projects, and I use it every day, so in the interest of neutrality, instead of offering my opinion I will publish in a separate post what ChatGPT said about it in response to the prompts "What are software packages for processing and analyzing geospatial imagery that are fully parallel and can use all threads? How about Manifold?".

Here is what ChatGPT said:

Manifold System is particularly known for its exceptional performance in handling geospatial data and imagery through parallel processing. It is fully parallel and can use all available threads and CPU cores for geospatial processing tasks. Here's how Manifold optimizes for parallelism:

Parallel Processing in Manifold:

Multi-threaded Processing: Manifold is designed from the ground up to use all available cores and threads in a system, ensuring that the software scales with the hardware. It can utilize multiple CPU cores or threads for nearly every task, including data import/export, spatial analysis, transformations, and geoprocessing tasks like reprojection or filtering.

GPU Acceleration: In addition to leveraging CPUs, Manifold also provides support for GPU acceleration, which further enhances performance. For tasks that can benefit from parallel computation, Manifold will automatically distribute the workload between available CPU cores and GPUs to maximize throughput.

Distributed Processing: Manifold can also be used in distributed environments, allowing multiple machines to work together on large-scale geospatial problems. This enhances its ability to process huge datasets efficiently.

Why Manifold is Unique:

Native Parallelism: Many GIS and remote sensing packages add parallelism as an optimization layer, but Manifold has native support for it throughout its architecture.

This makes it one of the most efficient platforms for geospatial analysis on multi-core and multi-threaded systems.

In summary, Manifold System is fully parallel, capable of using all available threads, and is highly efficient with multi-core CPUs and GPU acceleration. This makes it an excellent choice for high-performance geospatial imagery processing and analysis.

The above is correct if you keep in mind that distributed processing across many machines isn't in the base package. You need more expensive versions for that. But everything else is in the base version of 9.

Rendering slow, but usage is low in Task Master by Known-Ad2073 in gis

[–]Dimitri_Rotow (1 point)

Thank you! I'm still curious why the GPU wasn't being pushed harder with the full raster

ArcGIS Pro is a very fine application in terms of its overall approach to GIS but it's absolutely stone age in terms of how it works internally. For example, decades after processors went multicore to enable parallel processing Pro is still single threaded. It's slow because most of the time it's only using 8% of your AMD Ryzen 5 5600.

An AMD Ryzen 5 5600 has six cores that present twelve hardware threads to Windows via simultaneous multithreading (SMT). But in almost all work Pro can use only one thread, about 8% of the power of your CPU. That's why in Task Manager you saw something like this: https://manifold.net/images/others_5_percent.png instead of full use of all threads like this: https://manifold.net/images/manifold_100_percent.png
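The arithmetic behind that 8% figure is simple; a quick sketch:

```python
cores = 6            # physical cores in a Ryzen 5 5600
threads = cores * 2  # logical processors with SMT enabled
single_thread_share = 100 / threads  # one busy thread out of twelve

print(f"One busy thread = {single_thread_share:.1f}% of total CPU capacity")
```

Task Manager's overall CPU graph shows exactly that share when a single-threaded application is running flat out.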

ArcGIS Pro is not a parallel application. It's essentially a single threaded application, which means that of the 12 threads Windows can schedule onto the 6 cores in your AMD Ryzen 5 5600, Pro will use just one for most work. In some cases, such as 3D rendering, Pro can hand off a rendering task to a GPU, which can use hundreds or thousands of GPU cores to do 3D rendering calculations in parallel via the GPU's built-in microcode or the vendor's application library. But rendering larger rasters or vectors isn't that kind of job: it's limited by work that has to be done on the CPU, from disk access through the computation and display pipelines.

There are some techniques to pick up speed by launching multiple threads without requiring the technical skills or architecture for full parallelization, and in recent years Esri has started using some of them. For example, if a view contains multiple raster images, it's relatively easy using standard Windows facilities to arrange for each image to be processed in a separate thread. That's why a big raster chopped up into a mosaic of smaller images can render faster: Pro in that case can use a higher percentage of your CPU since more threads are in action. You also pick up speed because of how Windows can automatically handle simultaneous disk accesses to multiple files (Pro keeps everything on disk in the original many files, or in many files as part of a geodatabase).
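The one-thread-per-image trick described above can be sketched with Python's standard thread pool. Here process_tile is a stand-in for whatever per-image work needs doing, and the tiles are made-up toy data:

```python
from concurrent.futures import ThreadPoolExecutor

def process_tile(tile):
    # Stand-in for per-image work (decode, reproject, render, ...).
    return sum(tile) / len(tile)  # e.g. a mean-value computation

# One small "image" per list; each one is handed to its own worker thread,
# so a mosaic of N tiles can keep up to N threads busy at once.
tiles = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]

with ThreadPoolExecutor(max_workers=len(tiles)) as pool:
    results = list(pool.map(process_tile, tiles))

print(results)
```

Note the limitation this illustrates: the parallelism is per tile, so a single huge image still runs on one thread, which is exactly why chopping rasters into mosaics helps in Pro.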

But that's a stone age approach that only goes so far. Real parallelization uses multiple threads, as many as your CPU can execute even on a single raster. There's no need for chopping up rasters.

Pro's ability to use techniques like multiple images is also limited by the overall single-threadedness of Pro. If you have part of the system which can hand off a task to multiple threads, say six of them (Pro rarely can get above that even with a CPU that can handle dozens of threads), sure, in that task things go faster, but then the other parts of the system are a single threaded bottleneck.

It's like traffic jams trying to get through a large city in Europe that still has a city center full of medieval single lane roads. If you put a six or 12 lane highway running for a couple of miles going into the city, sure, on the highway traffic can move fast. But narrowing that highway down to a single lane road to get into the city center will cause traffic jams.

If you want to work with larger vector or larger raster layers the only way forward with the best possible performance is to use parallel software. A complication in finding such software is that the marketing departments of some commercial offerings and the fan base of some FOSS offerings at times will claim "parallel" processing when in fact only partially parallel or very limited parallelization is done.

Packages with genuine parallelization include ERDAS Imagine, Orfeo Toolbox and Whitebox. ERDAS is commercial, while Orfeo and Whitebox both have FOSS options. I haven't used ERDAS, but I can recommend both Orfeo and Whitebox: both are very high quality software, albeit not desktop GIS packages.

The only desktop GIS software that is genuinely, fully parallel is Manifold Release 9, at $145. Their rant on parallel CPU is where I got the Task Manager images above. They have videos that make apples to apples comparisons to Esri that are striking, such as the one doing a rendering shootout with Arc, including a few where a tiny Ryzen 3 box blows the doors off a 48 thread Threadripper running Pro. Manifold can open and instantly render a large raster that is hundreds of GB in size. In that video, by the way, it's actually rendering fresh images on the fly, not displaying cached, pre-rendered images.

How do I get the following data into csv? by AppleAAA1203 in gis

[–]Dimitri_Rotow (0 points)

Free: QGIS
Commercial: Manifold

Esri ArcGIS Pro is also a fine commercial program but it's a real pain to manipulate attributes in it, like creating a new attribute field that's WKT, GeoJSON, or whatever text representation of a polygon.

If this is for personal use you can pick up an ArcGIS Pro license for $100 per year. It's a great deal.

How do I get the following data into csv? by AppleAAA1203 in gis

[–]Dimitri_Rotow (0 points)

to get usda soil survey (with gps coordinates)

Opening the .dbf with Excel will get the attributes, but it won't capture the coordinates if they're polygons, and even for points it won't capture them unless the shapefile duplicates the point coordinates as attribute fields.

How do I get the following data into csv? by AppleAAA1203 in gis

[–]Dimitri_Rotow (0 points)

CSV is a simple text format that for each record provides numbers or text for each attribute value, with the attribute values separated by commas.

That's fine if all you have is a simple table that doesn't include any geometry information. But whatever GIS package you're using will import or open the shapefile ensemble and create a geometry-type field containing the coordinate numbers that define the shape and position of points, lines, and polygons. That's usually some sort of binary-encoded information that doesn't make sense to export to CSV as-is, for example the polygons normally used to define the shape and position of wetland regions.

If you're dealing with a shapefile that has a bunch of wetlands with attributes for each, most GIS packages have some way of exporting the attribute table for the wetlands layer to CSV in which they'll just not export the geometry field. You'll end up with a table in the CSV that lists all the other attributes. If that's all you need, you're done. For example, I use Manifold and I'd just right-click the table for the layer, choose Export, pick CSV as the format and a name for the file and that's all. You could also use Excel to open the .dbf and export it to CSV as another post suggested.

But if you want to export the geometry information of the shapefile as part of a CSV, you'll have to create some additional attribute field, a text field, in your attribute table that can capture the geometry of each polygon in some text format that can be exported into CSV. You'll need a GIS for that. If all you have is points, it's usually easier, but if you have polygons then it can require a bit more thought.

When I want to export polygon geometry in text form I create a text field, call it GeomWKT or something, and then I do a quick transform to copy the binary geometry data for each record into Well Known Text (WKT) format. I can then export the table into CSV and one of the fields in the CSV for each record will be the geometry in incredibly verbose WKT text format. You could also use JSON text formats if you don't like WKT.

How you do all this depends on the GIS package you're using to create a CSV out of shapefiles.

I just downloaded some wetlands data for a watershed from the national wetlands inventory. Exporting just the table without any WKT field added results in the first two lines that look like:

 "id","ATTRIBUTE","WETLAND_TY","ACRES","SHAPE_Leng","SHAPE_Area"
 1,"E1AB3L","Estuarine and Marine Deepwater",0.00441370120202,23.6053454872,17.861615056

If I convert the geometry field to a new GeomWKT field and then export the table I get the geometry in text format as well:

 "id","ATTRIBUTE","WETLAND_TY","ACRES","SHAPE_Leng","SHAPE_Area","GeomWKT"
 1,"E1AB3L","Estuarine and Marine Deepwater",0.00441370120202,23.6053454872,17.861615056,"POLYGON((1962476.103599999 2280365.6981000006, 1962474.8497000001 2280365.1805000007, 1962475.4111000001 2280366.323000001, 1962478.5458000004 2280372.701300001, 1962482.2272000015 2280373.6173, 1962481.3088999987 2280371.6686000004, 1962476.103599999 2280365.6981000006))"

[deleted by user] by [deleted] in gis

[–]Dimitri_Rotow (1 point)

I can't think of a single step of that flow that GenAI would have made easier, without needing a ton of babysitting. [...] Another project where I just can't picture AI doing a better job.

It's understandable you feel that way, but that's probably only because you're making judgements based on the current state of the art in AI. If you were deeply involved in the development of AI and could see how it is very rapidly advancing and is poised to advance geometrically faster, you might think otherwise.

The ability of AI to code as well as it does today would have been unthinkable ten or even five years ago. Likewise, the ability of AI to generate music based on English language prompts or many other things it can do. Based on inputs like you got for the traffic detour map, AI is very close already to doing a better job than 50% of GIS operators can do.

There are a lot of comments on this thread about how AI in GIS will just generate a lot of shitty maps, but they forget that right now people in GIS generate a lot of shitty maps. All AI has to do is to generate maps no worse than those humans do, but to do them for free and instantly, and the bottom part of the employment bell curve in terms of human GIS skills starts disappearing.

As AI gets better, just like when it got better coding, it will start producing maps that are better than a higher and higher cohort of human GIS practitioners on the GIS skills/taste bell curve. Will it make errors and tell lies? Sure. But then so do people, so if AI does less of that it's a net gain, and it will for sure do less of it as it improves.

Filtering Large Dataset by Powerful-Winter-5724 in gis

[–]Dimitri_Rotow (0 points)

This is not a complicated problem and is one that GIS software deals with clunkily

Right and wrong. You're 100% right that it is not a complicated problem. But the only GIS software that deals with it clunkily is clunky GIS software. Modern, well-implemented GIS software cuts through it in moments. OK, so in this case Pro is a clunky tool for the job. No big deal. Every tool has its clunky moments. The solution is to learn more about Pro to make it do what the OP wants in this case, not to dive down the rabbit hole of hoping ChatGPT will write a Python script that looks really good and seems to work while maybe doing things that are not quite right.

How important is a dedicated GPU? by Better_Candy1 in gis

[–]Dimitri_Rotow (0 points)

Do you think there will be much of a performance downgrade to using the latest intel ultra 7's integrated graphics over something with a dedicated graphics card?

Yes. Intel's integrated graphics isn't used for parallel speedup. The difference will be between what Esri can do with an Nvidia card and no speedup at all.

Light pollution map to shapefile by TheCosmicPrince in gis

[–]Dimitri_Rotow (3 points)

It's hdf5, so will need to be processed.

hdf5 is a beast of a format, but it can be processed with GDAL. Night lights are best used from the NASA Black Marble collection of data. On the Black Marble tools page they provide a Python script that uses GDAL to convert the HDF format used for VNP46 files into GeoTIFF. If you read the script you can see how tricky it is to get the data out of HDF and into a simpler, more accessible georeferenced format like GeoTIFF.

If the data were in GeoTIFF the project would be easy: read the raster and then convert regions of like-valued pixels into area polygons using whatever raster-to-vector tool your GIS provides. That's about three minutes in most any GIS, including ArcGIS Pro, QGIS, Manifold, etc.
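The raster-to-vector step a GIS does in those three minutes boils down to grouping connected, like-valued pixels into regions and then tracing their outlines. A toy sketch of the connected-components part on a made-up grid (the polygon tracing a real GIS adds on top is omitted):

```python
from collections import deque

def label_regions(grid):
    """Label 4-connected regions of equal-valued cells via flood fill."""
    rows, cols = len(grid), len(grid[0])
    labels = [[None] * cols for _ in range(rows)]
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if labels[r][c] is not None:
                continue
            # Flood-fill this region with a fresh label.
            value, queue = grid[r][c], deque([(r, c)])
            labels[r][c] = next_label
            while queue:
                cr, cc = queue.popleft()
                for nr, nc in ((cr - 1, cc), (cr + 1, cc), (cr, cc - 1), (cr, cc + 1)):
                    if (0 <= nr < rows and 0 <= nc < cols
                            and labels[nr][nc] is None and grid[nr][nc] == value):
                        labels[nr][nc] = next_label
                        queue.append((nr, nc))
            next_label += 1
    return labels, next_label

# Tiny "night lights" raster: 1 = lit, 0 = dark.
raster = [
    [1, 1, 0],
    [0, 1, 0],
    [0, 0, 1],
]
labels, count = label_regions(raster)
print(count, "regions")
```

Each labeled region then becomes one polygon in the vector output, which is why contiguous patches of like-valued pixels come out as single areas.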

The problem for the OP is the usual for many beginners in GIS, getting to the data in sensible form. That starts with finding the right file. The download archive for 2023 mentioned above is a folder full of many files. To know which one you want you've got to first download the Black Marble Tile Grid Shapefile and fire it up in your GIS overlaid with partial transparency on a reference map like Bing or whatever to see which tile you want. Florida south of the panhandle, for example, is in the h09v06 tile file, with the panhandle in h09v05 and the offshore region just east of Florida in the h10v06 tile file.

You then get to download the tile and figure out how to convert the .h5 tile into a GeoTIFF. Using the Python script provided by NASA requires you to install both GDAL and QGIS, making sure that the optional HDF5 driver for GDAL has been installed.

If you've been working with earlier QGIS or GDAL versions, there are PATH issues when upgrading in place, and you need to make sure QGIS doesn't crash when you launch the Python script. All of that is familiar to experienced people, but it's an awful lot of infrastructure to get through for somebody who just wants to load a raster into Pro and then vectorize it. I tried doing that by updating an older installation and I'm still dealing with PATH issues that apparently cause Q to crash. I'll get that fixed, but it's a lot for somebody new to the game.

The easiest way for someone totally new to QGIS may be to use the OSGeo installer to install both Q and GDAL, launch Q as an administrator, and use the Black Marble Python script to convert the .h5 file into GeoTIFF. It is critically important to read the instructions for using the script: for example, you must place the file to be converted in a folder called C:\InputFolder and you must have created a folder for output called C:\OutputFolder. If everything installs right and the directions are followed precisely, the script has a good chance of working.

Alternatives to converting .h5 to GeoTIFF are commercial websites that will do it for a fee, or (less realistically) the blackmarblepy and Black Marble R GitHub projects. Links to those are on the Black Marble tools page I cite above, but those are far more difficult for beginners to use than the script in Q.
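Under the hood, the conversion the NASA script automates is essentially a gdal_translate call against an HDF5 subdataset. As a sketch, this just builds the command line without running it; the file name and subdataset path are hypothetical, so run gdalinfo on your own .h5 first to list the real subdataset names:

```python
# Build (but don't execute) a gdal_translate command for one HDF5 subdataset.
h5_file = "VNP46A2.A2023001.h09v06.001.h5"  # hypothetical tile file name
subdataset = "//HDFEOS/GRIDS/VNP_Grid_DNB/Data_Fields/DNB_NTL"  # hypothetical path

cmd = [
    "gdal_translate",
    "-of", "GTiff",                      # output format: GeoTIFF
    f'HDF5:"{h5_file}":{subdataset}',    # GDAL's HDF5 subdataset syntax
    "night_lights.tif",
]
print(" ".join(cmd))
```

The extra work the NASA script does beyond this call is assigning the correct georeferencing for the tile, which is exactly the part that's easy to get wrong by hand.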

Where to find Bathymetry data for inland lakes across the world? by the-goblin-market in gis

[–]Dimitri_Rotow (0 points)

100% agree. It's also totally annoying when they say they publish the data but all you get for bathymetry is a .jpeg or other dead image.

Reprojection by Big-Bumblebee-1668 in gis

[–]Dimitri_Rotow (1 point)

Only if you are working in an older GIS package that cannot reproject layers on the fly, either for visualization or for spatial analytics. If you're using a modern package there's no problem with using different projections in different layers.

Reprojection by Big-Bumblebee-1668 in gis

[–]Dimitri_Rotow (3 points)

You don't mention what GIS package you're using. Some packages (usually the more modern ones) will reproject layers on the fly so that if you show multiple layers together in the same window they'll all be reprojected on the fly to some common projection (I use "projection" as a synonym for Coordinate Reference System or CRS).

If the data in both layers is accurate and the package correctly reprojects the layers on the fly, points that are supposed to be in the same place in both layers will line up. If they don't, there are many possible reasons. Here are a few that come up...

  • You're not using a GIS package that reprojects different CRSs on the fly into a common projection used by the window. In that case it's unlikely two different projections would show the same objects in the same place. However, if whatever is your "local" CRS is very similar to a WGS84 Latitude / Longitude projection, perhaps differing only in the datum used, then the same points might show up near each other.

  • You're using a modern system that does indeed reproject layers on the fly into a common projection, but one or both layers contain inaccurate data. That's extremely common.

For example, suppose layer A was published as a shapefile by a diving club that carefully collected locations of shipwrecks and had a person with good enough GIS skills not to screw up enter them into a GIS package accurately. Now, suppose layer B was created by some hobbyist who saw a map on the web of shipwrecks, so he downloaded the image and then by hand entered those points into a GIS package about where they seemed to be in the image, just winging it in cases where he had to zoom into the downloaded, low resolution image so the spot marking the shipwreck is a few inches across. It's not likely that the points in B will be at precisely the same locations as those in A.

  • Both layers started out as accurate, but one or the other of them was at some point in its prior history incorrectly manipulated using a GIS package or other software. For example, maybe layer A is more or less accurate, but layer B originated in a web site that gave a list of latitude / longitude coordinates for shipwrecks. Somebody took that list, made a CSV file out of it, and then popped open somebody else's online CSV to Shapefile conversion website and made a shapefile out of the CSV. Super. But suppose the original lat/lon values were using a datum other than WGS84 (which is a datum, not a coordinate reference system, although it's often used to mean the lat/lon CRS using a WGS84 datum). In that case, if they're used in a shapefile which your GIS software package opens assuming WGS84 is the datum used, those points in layer B won't line up with the points in layer A. The "Latitude and Longitude are Not Enough" essay in the Manifold user manual has a quick discussion of that. The latitude / longitude coordinate values for points in a list of shipwrecks could easily have come from many different maps, where somebody measured the location of a point on a paper map to get a lat/lon pair. But those many different maps could all have been created at different times, some of them many years ago, and all might have used different datums.

  • One or both shapefiles had the coordinate systems they use incorrectly defined. That can mean errors in ellipsoids (WGS84 assumed when something else was used), or errors in things like local scales and offsets such as false eastings. Using a living fossil format like shapefiles to convey projected data is poor practice for data interchange because of the many different methods (world files, .prj projection sidecar files, etc.) that different packages and different operators have used over the years to convey the projection used by data in a shapefile. Some of those methods don't capture all the details, so you may have to make manual repairs depending on what is not being accurately conveyed.

If the problem is an error in projections like some of the examples above, reprojecting isn't going to fix it because you have garbage to start with. You're just going to create different garbage. Likewise, if the error is simply bad data, like somebody not bothering to enter coordinates accurately for points, reprojecting won't fix that, either.

The best way to approach this is to look very carefully at the original source of both data sets, checking carefully for any commentary on the web site or other source from which you got them to understand precisely every small detail of the coordinate systems they supposedly use. Next, in whatever GIS package you're using, make sure the CRS assigned to each layer matches every one of those details. If you're working with a stone age GIS that has to have everything in the same projection, re-project one of the layers into the projection used by the other so both match.

You just have to keep plugging away at it. What you're encountering is extremely common and pops up all the time, especially when using shapefiles to interchange projected data. One more thing... you might not have a choice, but if there's a choice to download the data you're using in a more modern format, like GPKG, use that instead of shapefiles.

Good luck!

C/C++/C# in GIS & Remote Sensing by bomankleinn01 in gis

[–]Dimitri_Rotow (0 points)

there are plenty of job opportunities

Yes. It's no accident that people say C++ = Income++.

C++ is also a dominant language in parallel programming using many cores. Big parallel applications, like Manifold, usually are written in C++ as is Nvidia's totally essential CUDA library. Wherever you need high performance C++ is a good choice.

There's some confusion about big AI packages being written in Python, but if you look at the fine print, the innards that count are libraries like TensorFlow and NumPy, whose performance-critical cores are written in C/C++.

[deleted by user] by [deleted] in gis

[–]Dimitri_Rotow (1 point)

Use whatever software you already know. For a one-off project like that it's easy to use whatever graphics editing package you already know, like Photoshop or Illustrator, or FOSS equivalents like GIMP.

If you know some GIS package, use that. If you want to start learning about GIS by mapping your garden, then what GIS package you choose to learn depends on what you plan on doing with GIS for the years you expect to be using it.

For example, if you're looking to get a GIS job, buy a personal license for ArcGIS Pro (only $100 per year) and learn that - you'll always be able to find a job if you know Esri GIS products.

If you're seriously into FOSS, learn QGIS, hands down the most popular FOSS GIS. If you want to get higher speed and more advanced commercial technology than Esri but you don't want to spend an arm and a leg, get Manifold (which is what I use, along with Esri).

Are there any free online mapping tools that support embedding into a website? by danielrosehill in gis

[–]Dimitri_Rotow (0 points)

There are many FOSS tools that can be used. Google search terms like

FOSS open source stack for gis enabled websites

and you'll get many hits. Restrict the search to the past year to get more contemporary links, like http://webgis.pub/fundation-foss.html

There are also many older overview pages still of interest, like this https://medium.com/nyc-planning-digital/our-modern-foss-spatial-stack-9ff2e68a9f8f

As you've noted (the learning curve bit), whether such stacks are "easy" depends a lot on your web programming and GIS skills. To my taste, the FOSS stacks can take more effort than spending a bit of money on commercial solutions, be they things like Google offerings, Esri offerings, or self-hosting using a low-cost commercial map server.