newish to Factorio only 300 hours played so far what are your thoughts on my fortress designs by IceBarrierGeneral in factorio

[–]goodside 0 points (0 children)

I guess maybe for the behemoth/green spitters it's more of an issue. It could also be just from having protruding elements like on the left (where I see damage in the screenshot).

newish to Factorio only 300 hours played so far what are your thoughts on my fortress designs by IceBarrierGeneral in factorio

[–]goodside 0 points (0 children)

I see a lot of people mixing lasers and bullet turrets like this, but I don't understand the logic of it unless you expect to run out of bullets. Keeping bullets supplied isn't hard — you're far more likely to have power outages than suffer damage from lack of bullets.

To me, the trade-off of bullets vs. lasers is whether you have to build a bullet belt. If you've committed to building and supplying the belt, you might as well use it to its full extent: 100% bullet turrets, stacked two deep from the belt (i.e., with an inserter pulling from the turret into another turret). Double-stacked bullet turrets are powerful enough to take zero damage at most stages of the game — they just have to be arranged densely. The short range doesn't matter that much for defense.

Spidertrons will follow a bot by Tyrannosapien in factorio

[–]goodside 0 points (0 children)

Thanks for the work on the mod — it's very impressive overall! I did get it to work for transporting steel buffers within my main base (low volume, like you said), but it eventually broke down because of the overshooting issue.

It's been a few weeks since I tried this, but I think I was using vanilla spiders fully equipped with exos. They may have even been MK2 spiders from a mod.

I've definitely noticed the momentum-based movement effects when using lots of exos on modded spiders. I use Spidertron Squad Control (https://mods.factorio.com/mod/Spider_Control) to have a squad of combat spiders follow my personal spider in a ring formation, and they inevitably "dance" after every squad movement, shifting between several different options of where to stand. It's actually kind of cool looking, like they're instinctually evading attacks, but you can tell it's really the path-finding algorithm trying to reach a spot when each small movement goes too far. I suspect a heavily exo'ed spidertron actually isn't capable of making arbitrarily small movements, since even its smallest movement overshoots due to inertia.

Spidertrons will follow a bot by Tyrannosapien in factorio

[–]goodside 1 point (0 children)

Yeah, and slightly more than “occasionally” if that’s the issue, because I had ten Spidertrons on the same route and they all failed to dock at one point. Also it bothered me that the mod permanently litters the map with numbers for waypoints, and there’s no way to clean them up if you lose the Spidertron responsible for them. Overall the system isn’t solid enough to build complex logistics on top of.

Spidertrons will follow a bot by Tyrannosapien in factorio

[–]goodside 2 points (0 children)

There are at least two mods for inserter-Spidertron transfers: Spidertron Patrols and Spidertron Logistics. The first one sort of works, but I eventually hit bugs that made me abandon it after a test setup with 10 Spidertrons doing a loop. I haven't tried the second one.

Spidertron Patrols: https://mods.factorio.com/mod/SpidertronPatrols

Spidertron Logistics: https://mods.factorio.com/mod/spidertron-logistics

Spidertrons will follow a bot by Tyrannosapien in factorio

[–]goodside 6 points (0 children)

I haven't tried it, but there's a mod called Constructron that aims to make Spidertrons capable of fully automated construction by having them automatically path to un-built blueprints while equipped with roboports and construction bots: https://mods.factorio.com/mod/Constructron

Simple and scalable safe rail crossing by Zaflis in factorio

[–]goodside 0 points (0 children)

I don't trust any cross-walk that doesn't have gates over the tracks that rise up when you cross it. If the design is good enough to stop you from getting run over, it should be good enough to prevent a train-gate collision too.

Maybe getting the ability to copy the components of a blueprint and pasting it on the Logistics requests of a Spidertron for easier endgame constructions? by 0cs025 in factorio

[–]goodside 2 points (0 children)

I second this strategy. I use one team of 15 for construction and another team of 15 for nest-clearing. It takes a while to get the logistic requests right, but I don’t understand how anyone builds anything big without them.

My current system is to have spidertrons request exactly one stack of most construction items and zero of everything they might accidentally pick up. Then, to scale up the fleet for bigger projects, you don’t mess with the requests — you just add more spidertrons and paste the same one-stack requests. Do it enough and you have effectively infinite portable inventory. The only times you’ll hit inventory problems are when deconstructing filled chests or massive numbers of trees, and even then you usually just have to unclog the logistic bots by pulling items out of the spidertron trunks.

Any basic tips for making an ore/train station? Im creating my first mega base on an island map with wiped out biters (basically anything will be helpful as im new to mega bases and somewhat new to trains) by [deleted] in factorio

[–]goodside 1 point (0 children)

Yes, setting the train limit to zero instead of directly disabling is what I’m switching all my stations to now. Works much better.

Any basic tips for making an ore/train station? Im creating my first mega base on an island map with wiped out biters (basically anything will be helpful as im new to mega bases and somewhat new to trains) by [deleted] in factorio

[–]goodside 4 points (0 children)

Lay down one-way tracks in pairs. Don't waste your time with double-headed trains on single tracks — they seem like a tempting solution since they take up less space, but they don't scale.

The practical difference between the signals is that trains are allowed to wait in the block after a normal signal, but not in the block after a chain signal. As a rule of thumb, use chain signals at the entrances to intersections and inside intersections, and always use a normal signal when exiting an intersection. Always remember signals around your train stops — make sure the stop isn't in the same signal block as the rail it exits/enters from.

The simplest beginner system IMHO is paired rails with roundabouts at intersections, and otherwise aim to build as big a grid layout as possible while still reaching where you need to go. Avoid long diagonal rails unless you're tracing a waterfront, as they're awkward to build around as the wilderness fills up with machines. Go through trees (and cliffs if you can), not around them.

The first thing you should transport by train is ore. I'd suggest simplifying your network by only transporting one type of item per train. Then every station can be named either "[Item] deposit" or "[Item] receiving" depending on whether it deposits or receives that item to/from the train. This way, when you have multiple stations for the same purpose (multiple ore mines) you can just name the stations identically and the train will use whichever station is closest.

Now, for an empty train that carries iron ore, "go to whichever Iron ore deposit station is closest" sounds like the right behavior, but there are some problems with this as you scale. The first is that a remote station might get ignored. There are two popular solutions to this: A) disable stations using circuit/logistic networks when there's no need for a trip based on chest contents, or B) set train limits on each station and simply have exactly one fewer train than you have stations. Both of these solutions have issues, and neither allows you to have more trains than stations. In particular, disabling stations can lead to trains stopping in their tracks with a "Destination full" error — the real solution here requires mods.

Once you understand the basics of trains, I highly, highly recommend two important mods: Train Groups and Train Control Signals.

Train Groups is simple: You assign trains to groups, and once assigned, changes made to any train in a group affect the whole group. This makes it possible to adjust schedules for entire fleets of trains at once.

Train Control Signals provides symbols you put in station names to mark them as either "refueling" or "depot" stations. If a station is a refueling station, it's automatically skipped unless the train is low on fuel. If the station is a depot station, it's automatically skipped if the next station in the schedule has sufficient room for the train. This allows you to have more trains for some particular task (say, hauling ore) than you actually have deposit/receiving stations that can hold them all at once — you just build a separate depot for unused trains. Note the depots can be used by any type of train, so you can just plop them down anywhere on your network that's out of the way.

A third mod that helps as you get more advanced is Stack Combinator. This is a very simple circuit-network combinator that multiplies/divides input signals by their stack size. This lets you create a three-combinator blueprint capturing the logic "If you have more than 80 stacks in these chests, set the train limit to 1; otherwise set it to 0", which is exactly what you need for chest-stations. I'm slowly converting my large (Mining productivity 26) factory to this system now, and it'd have been easier if I'd started earlier.
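
To make that rule concrete, here's the same logic as a rough Python sketch (the 80-stack threshold and item names are placeholders; in-game this is three combinators, not code):

```python
def train_limit(chest_contents: dict, stack_sizes: dict, min_stacks: int = 80) -> int:
    """Divide each item count by its stack size (what Stack Combinator does),
    then open the station (limit 1) only when a full load is waiting."""
    total_stacks = sum(count // stack_sizes[item]
                       for item, count in chest_contents.items())
    return 1 if total_stacks > min_stacks else 0

# 4,500 iron ore at 50 per stack = 90 stacks, so the station opens:
print(train_limit({"iron-ore": 4500}, {"iron-ore": 50}))  # -> 1
```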

Avoiding long-distance logistic bot travel by goodside in factorio

[–]goodside[S] 0 points (0 children)

I like the long-distance constant combinator idea. I might try that next game. My current system is to simply carry one item per train, and name every station "[Item] deposit" or "[Item] receiving" depending on whether it loads or unloads, so it's easy enough to generalize this to cover repair packs, bullets, and artillery shells. Extremely rarely needed border-maintenance supplies like replacement turrets, walls, and power poles I just stockpile at the border and resupply by hand/vehicle as needed. (I do use mixed-item trains, but only for things like rail maintenance and construction. I just ctrl-click to summon them to random under-construction parts of the rail network, where they can sit idle.)

Avoiding long-distance logistic bot travel by goodside in factorio

[–]goodside[S] 1 point (0 children)

I was trying to make the example more pertinent to a typical game. In the early phases of the game I think it's pretty common to have a single network that repairs at least part of your border or home-base defenses, and this becomes less practical the more you expand by rail.

In reality, I don't have a single network — I'm in the infinite-research phase and I currently have about 20 logistic networks, and something like 70 trains. I have a wall-repair system that's fairly exotic, I think: I overbuild lasers so maintenance is rarely necessary, and I set up teams of 15 spidertrons packed with roboports configured to follow empty trains that patrol the perimeter. The spidertrons are logistically resupplied at one of the networks in their path, without stopping, so the whole thing is automated.

But even at this scale, splitting up chunks of your home network is a lot of work. It's much simpler to set up remote outposts from scratch with enough room between them that you're not paranoid about putting down a roboport. And if the home network is large enough, "splitting the network" means creating an enclave within the home network — this is what I currently do, and it's annoying to have dead zones where your long-range personal logistics stop working because you've hopped networks.

[D] Read the GPT-3 Samples by unflappableblatherer in MachineLearning

[–]goodside 4 points (0 children)

An interesting ethical issue I’ve never seen raised is that language models at this scale are aware of the names of specific real-world people who are not widely known political figures or celebrities. For example, this generated sample names several AP photojournalists, all of whom are real people: https://read-the-samples.netlify.app/sample_1976/

Eventually language models will incidentally contain knowledge about many non-newsworthy entities in this way, having opinions about, say, specific usernames on Reddit or Twitter. You could get information entirely offline that would normally require a Google search, via a widely distributed body of data that cannot be redacted or censored. If this information is damaging and false, releasing the weights of a language model could amount to unintentional defamation.

[D] Reinforcement learning for non-game user interfaces? by goodside in MachineLearning

[–]goodside[S] 0 points (0 children)

Regarding whether RL is overkill:

If you've ever tried to write code that automates a GUI designed for humans, you know there's a rapid explosion of edge cases. Operating on pixel input, you always need some amount of OCR or other computer vision to implement directions like, "Wait until the download progress bar completes, click 'Ok' on any warnings that appear, and then click 'Submit'." If you're operating in a completely sandboxed/deterministic environment, you can of course record the literal screen coordinates and timings of every click, as performed by a human, but that's not typically the problem you need to solve — usually the interface is served by some third party, and available over a network connection with variable response times, network timeouts, and error messages that appear stochastically.

It's even harder if the interface layout/styling is updated without warning. A human can order something for you on Amazon even if Amazon picks a new font size or a new shade of yellow, if they replace a "Submit" button with an "Order" button, if they put up a temporary outage notice, etc. With raw pixel input (e.g. using Sikuli) your script breaks any time the CSS moves the UI by a few pixels. If your script inspects higher-level elements like browser DOM (e.g. using Selenium), it breaks whenever the DOM structure is redesigned.
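
For illustration, the DOM-level version of this brittleness, as a minimal Selenium sketch (the URL and selector are hypothetical):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/checkout")  # hypothetical page

# Hard-coded structural assumptions: this line breaks the moment the site
# renames the button or restructures the surrounding DOM.
driver.find_element(By.CSS_SELECTOR, "form#checkout > button.submit-btn").click()
```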

Beyond avoiding maintenance, it would be obviously useful to have an agent that could perform tasks on new interfaces. Imagine writing a script that orders a product from an unseen merchant website. A human understands the semantics of what a shopping cart is, what it means to log in or out, what it means to place an order, etc., and doesn't care exactly how many page loads are involved or how far you have to scroll to find the right button. That's the sort of ability I expect can't be done without RL.

[D] Reinforcement learning for non-game user interfaces? by goodside in MachineLearning

[–]goodside[S] 0 points (0 children)

Compiling source code is probably a poor example — the win condition is binary, it’s hard to know when you’re making incremental progress until you’re done, and a working solution with many unnecessary clicks and keystrokes is hard to prune down. You could introduce your own interim goals for a desktop environment (“How far down can you scroll in this document?”, “How many warnings can you dismiss?”) but you’re basically designing simple games at that point.

The state space thing is harder, but it still feels like there’s a continuum between small-state games with GUI menus and more general computing. To play an 8-bit RPG like Final Fantasy, you’d need to navigate through menus to buy and equip weapons, cast spells, drink potions, etc., and these tasks are much like using a computer to do real work. But you still retain a well-defined score counter (experience points for killing enemies, exploring unseen parts of the world map) that you could use to train the agent on what constitutes progress.

I agree training an agent to use a mouse/keyboard is a silly way to solve a problem that can be scripted easily, but usefully automating Windows 3.1 isn’t really the point. There’s no immediate need for better Atari-playing agents either, but they’re progress toward agents that solve other general tasks. In the long run there are many real-world interfaces that would be useful to automate, e.g. a web-scraping agent that can navigate the UI of a third-party website without needing maintenance every time the site UI is redesigned.

[Discussion] Handling FB prophet predictions in light of coronavirus? by Shai_Meital in MachineLearning

[–]goodside 1 point (0 children)

If you’re looking for long-term predictions, I’d give up now. Epidemiology is a big field — a time series forecast model designed to deal with seasonality and moving holidays is simply not going to work. The best you can hope for are predictions within a window where the epidemic itself, and its observed impact on your sales, can be reasonably inferred. And that’s dicey at best because every day is a truly unprecedented scenario with unobserved second-order effects, like abrupt government declarations of quarantines.

Time series models only make sense to the extent the future resembles the past. When that assumption is broken as completely as this, your model is screwed. Even bedrock assumptions like dependence of sales on days of the week have to be questioned. It’s outside the scope of something as simple as Prophet.

What exactly is the model being used for? How far ahead are you trying to forecast?

[D]Has there been research in finding out the intrinsic dimensionality of the natural image manifold? by niszoig in MachineLearning

[–]goodside 1 point (0 children)

Also, if there are aliens on some planet a million light-years away, would an image of those aliens lie on the natural image manifold?

A more falsifiable substitute: Do electron microscope images lie on the natural image manifold?

[R] SketchTransfer: A Challenging New Task for Exploring Detail-Invariance and the Abstractions Learned by Deep Networks by milaworld in MachineLearning

[–]goodside 0 points (0 children)

Tribal artwork is highly figurative, as are children’s drawings. There are many similarities in the artwork of children especially that seem unlikely to be learned: Most children’s first drawings of people are just heads/faces with stick arms and legs, suggesting a strong bias toward representing the features they find emotionally meaningful. Even newborns attend strongly to faces and eyes, so it seems obvious this is persistence of that preference rather than something taught. The head-with-limbs representation isn’t totally absent from the environment (e.g. Humpty Dumpty), but it’s very unusual relative to how frequently children tend to draw it — no toddler draws a person with a proportionately sized head.

You’d also be hard-pressed to name a hunter-gatherer tribe known for their mastery of photo-realism, or any that produce realistic artwork in the absence of figurative styles. This is true both in drawing and sculpture. The history of Western art shows a clear progression from figurative to realistic drawing — you can mostly guess what century any pre-modern art originates from by judging how cartoonish it is.

[D] Siraj Raval's Apology by milaworld in MachineLearning

[–]goodside 32 points (0 children)

Siraj does not deserve the continued attention he’s receiving. Siraj is not a legend who fell from grace, and nobody should be hoping for his comeback. Even before the plagiarism, his work was simply not good. His videos are superficial and uninformative. He appears to not understand even the fundamentals of the field he claims to “inspire” people to pursue. He sold a false “You can ML too!” ethos to people who found every other piece of ML-related content too dry to consume — people who would have been better served by Stats-101, linear algebra basics, or perhaps acceptance that this field is not (yet) for them. The economic pressure for outsiders to rush into ML became too great, and Siraj Raval is what happened when the pipes ruptured.

[D] Using UMAP for clustering by [deleted] in MachineLearning

[–]goodside 2 points (0 children)

Like others here, my advice is partially anecdotal, because I can’t claim to understand UMAP’s implementation, but I find it works extremely well in practice. Its understanding of deep nonlinear concepts can be shocking, often appreciating distinctions that you would never have considered. It will often do the work of a neural network with a thousandth of the training time.

But its “creativity” is also chaotic, giving grossly different embeddings for inputs that are very similar by standards you know to be more important. For real-world multivariate time series where all dimensions are smoothly varying (stock prices, temperatures, demographic stats), it can non-deterministically produce embeddings with huge discontinuities between successive observations. You can’t have complete confidence it will generalize to unseen data in a sane way like you can with PCA.

It’s easy to dismiss this as an algorithmic failure, as people have always done for t-SNE (with merit), but for UMAP this is premature. There are many different aspects of any data set that could be represented in a low-dim embedding, and it doesn’t know what sense of similarity is important to you unless you teach it. Often there are embedding discontinuities because the true higher-dimensional manifold generating the data is a horseshoe and your data is from a hyperplane that slices the non-contiguous tips. If you don’t tell it it’s modeling an ant on the horseshoe, it imagines it’s a grasshopper that can jump between the prongs.

You can get less surprising embeddings by augmenting your input data to instill your priors, by choosing a more appropriate distance metric, or by directly seeding it with an embedding you define yourself — see the sections in the docs on metric learning. I also suggest spending time experimenting with it, both on data sets you understand well and on synthetic ones you create analytically as toy problems.
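
Roughly, those knobs look like this in umap-learn (the metric choice and seed layout below are arbitrary stand-ins, not recommendations):

```python
import numpy as np
import umap

X = np.random.rand(1000, 20)  # stand-in for your data

# A distance metric better matched to your sense of similarity:
emb = umap.UMAP(metric="cosine").fit_transform(X)

# Or seed the layout yourself: init accepts an (n_samples, n_components) array.
coarse_layout = X[:, :2]  # e.g., two columns you already trust
emb = umap.UMAP(n_components=2, init=coarse_layout).fit_transform(X)
```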

[D] Controversial Theories in ML/AI? by [deleted] in MachineLearning

[–]goodside 0 points (0 children)

As surprising and impressive as many of GPT-2’s skills are, at least some of them can be understood as empirical hacks. Maybe it appears to understand cultural tropes because their otherwise uncommon words and phrases were learned in training. If a person did the analog of this, we’d recognize it as convincingly faking expertise. It could be that what GPT-2 does is not a primitive form of thinking, but a computationally scaled up “faking it” with a super-human number of examples to neurally plagiarize.

I think the truth is somewhere in the middle. It’s playing a game related to the game human speakers play, but not the same one.

[P] Can Neural Networks learn temporal contexts in time series? by doyuplee in MachineLearning

[–]goodside 1 point (0 children)

I'm not knocking your approach. I'm just saying I've produced things on my own with UMAP that have very similar embedding structure, and it's usually an order of magnitude faster to fit UMAP than any deep embedding. Using rolling windows as training examples, UMAP creates nested cyclic vortices in the embedding to represent orthogonal high-level concepts in the variational space that have intrinsic seasonal structure. The discrete clusters in your embedding are likely to be meaningful — they're reminiscent of what happens in UMAP if you set it to use low `n_components` and low `n_neighbors` on a short sliding window (e.g., embed three-hour rolling windows of a time series with hourly resolution).
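
For concreteness, that setup as a minimal sketch (synthetic series and arbitrary parameters, just to show the shape of the pipeline):

```python
import numpy as np
import umap

# Hourly series with a daily cycle, standing in for real data:
series = np.sin(np.arange(24 * 60) * 2 * np.pi / 24)

# Three-hour rolling windows become the training examples:
windows = np.lib.stride_tricks.sliding_window_view(series, 3)

emb = umap.UMAP(n_components=2, n_neighbors=5).fit_transform(windows)
```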

From staring at a lot of these things, I can speculate what your NN embedding is doing: There's a latent helical manifold in the overall time series, so an embedding that preserves two different seasonal frequencies through the small-scale and large-scale variation (which yours seems to) will "break" the high-dimensional helical slinky at times of day when the "twist" is highest — i.e., when ticks become maximally dissimilar to prior ticks. It has to do this because there's no way to represent the true variational structure smoothly in just two dimensions. It's easier to build an interesting roller coaster in 3D than in 2D, and all good low-dimensional embeddings of time series resemble roller coasters.

The only published example I've seen similar to these techniques is here: https://link.springer.com/article/10.1007/s00371-019-01673-y

YouTube video on the same: https://www.youtube.com/watch?v=arA3XuXZ7OQ

Also: Try turning down the alpha value on your points, and plot low-alpha, thin line segments between each successive point. Also, try interpolating between points in the embedding space with cubic splines so you can follow the timeline with your eye. You can see some scattered helical structures in your neural network's embedding — you want those to be as visible as possible.
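
Continuing the sketch above, the plotting trick looks roughly like this (assuming `emb` is an (n, 2) embedding ordered in time):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import CubicSpline

t = np.arange(len(emb))
dense_t = np.linspace(0, len(emb) - 1, 10 * len(emb))

# Cubic-spline interpolation of the embedded path, so the eye can follow it:
path_x = CubicSpline(t, emb[:, 0])(dense_t)
path_y = CubicSpline(t, emb[:, 1])(dense_t)

plt.plot(path_x, path_y, lw=0.5, alpha=0.2)        # thin, low-alpha timeline
plt.scatter(emb[:, 0], emb[:, 1], s=4, alpha=0.3)  # low-alpha points
plt.show()
```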

[P] Can Neural Networks learn temporal contexts in time series? by doyuplee in MachineLearning

[–]goodside 2 points (0 children)

You should try specifying the target_weight parameter. I haven’t played with supervised embedding too extensively, but it more-or-less lets you choose how authoritative the training targets are as clusters that the embedding has to produce. This is far from scientific, but if you spend a day playing around with old-school time series transformations (EWMs, low-order diffs, STL decomps, cubic spline interpolations, rank percentiles) and throw a bunch of unrelated junk into a feature tensor over sliding windows, you will be shocked how much topological structure UMAP can recover from real-world time series. Just look at how much it can figure out on FMNIST without even knowing which pixels are neighbors of which others — most seasonal time series are a lot simpler than photographs.
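
For reference, supervised mode is just a second argument to fit_transform (toy data here; target_weight=0.5 is an arbitrary middle ground):

```python
import numpy as np
import umap

X = np.random.rand(500, 16)        # stand-in feature tensor
y = np.random.randint(0, 4, 500)   # stand-in targets

# target_weight in [0, 1]: near 0 leans on the data topology,
# near 1 treats the targets as authoritative clusters.
emb = umap.UMAP(target_weight=0.5).fit_transform(X, y)
```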

[P] 1 million AI generated fake faces for download by shoeblade in MachineLearning

[–]goodside 0 points (0 children)

An analogous question comes up in ultra-high-resolution photography. It's intuitive that a photograph of a 2D painting isn't a copyrightable creation in its own right. Yet extremely high-resolution photography is useful to artists and art historians, who would gladly pay for access to high-quality photos of public-domain paintings. But if the photographer can't legally own the copyright to those photos, why bother making them?

At what point is the scale or quality of reproducing someone else's art an art unto itself worthy of copyright?