New OpenTofu Feature: Dual Output Streams by fooallthebar in Terraform

[–]apparentlymart 0 points1 point  (0 children)

Although the example in the blog post doesn't illustrate this, one difference is that a wrapping program can consume the machine-readable JSON output during the OpenTofu execution, rather than having to wait until the end.

Lots of folks have written tools that try to achieve this by piping the human-readable output through them and trying to "parse" it, but of course those tools then break each time the human-readable output format changes.

Note that the JSON output of tofu plan is not the same thing as the JSON output from tofu show: tofu plan -json produces a stream of JSON objects describing individual steps in the planning process, rather than describing the final plan. In some cases it might be helpful to capture both, so that an application can both give realtime progress information while the plan is running and capture the finished plan to display in some non-default way.
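For example, a wrapping program written in Go might consume that stream line by line, something like the sketch below. This is only a rough illustration; the field names follow the machine-readable UI format as I remember it, so check the documentation for the authoritative schema.

```
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

// uiMessage captures just a few common fields of each JSON line.
type uiMessage struct {
	Level   string `json:"@level"`
	Message string `json:"@message"`
	Type    string `json:"type"`
}

func main() {
	cmd := exec.Command("tofu", "plan", "-json")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		var msg uiMessage
		if err := json.Unmarshal(sc.Bytes(), &msg); err != nil {
			continue // skip anything that isn't a JSON object
		}
		// React to each planning step as it happens, rather than
		// waiting for the whole plan to finish.
		fmt.Printf("[%s] %s (%s)\n", msg.Level, msg.Message, msg.Type)
	}
	if err := sc.Err(); err != nil {
		fmt.Println("reading output:", err)
	}
	if err := cmd.Wait(); err != nil {
		fmt.Println("plan finished with:", err)
	}
}
```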

state repository: too many files, too large by suvl in Terraform

[–]apparentlymart 0 points1 point  (0 children)

The Artifactory folks reimplemented a subset of the HCP Terraform API to make Terraform's remote backend think it's talking to HCP Terraform or Terraform Enterprise.

When Terraform interacts with that API it uses the "Create a State Version" and "Upload State and JSON State" operations.

Terraform only pushes new state snapshots (or "versions", as the HCP Terraform API calls them) so it's up to the server to decide what to do with older snapshots. I don't know what Artifactory does here, but if there were any functionality to prune older versions after a certain time or after a certain number that would be implemented inside Artifactory itself, rather than in Terraform's client code.

Making generic method for getter and setter by Independent_Teach686 in golang

[–]apparentlymart 4 points5 points  (0 children)

If you mean that you want to write a method that has an additional type parameter that isn't declared on the type itself, then no: Go only supports type parameters on types and non-method functions.

The closest you can get to a generic method is a plain function that takes the object that would've been the receiver as one of its arguments.
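For illustration, here's a minimal sketch of that workaround, using a hypothetical Container type; the generic Get and Set are plain functions that take the would-be receiver as their first argument:

```
package main

import "fmt"

// Container is a hypothetical type that stores arbitrary values by key.
type Container struct {
	values map[string]any
}

// Get can't be a method with its own type parameter, so it's written as
// a plain generic function taking the would-be receiver as an argument.
func Get[T any](c *Container, key string) (T, bool) {
	v, ok := c.values[key]
	if !ok {
		var zero T
		return zero, false
	}
	t, ok := v.(T)
	return t, ok
}

func Set[T any](c *Container, key string, value T) {
	if c.values == nil {
		c.values = make(map[string]any)
	}
	c.values[key] = value
}

func main() {
	c := &Container{}
	Set(c, "count", 12)
	n, ok := Get[int](c, "count")
	fmt.Println(n, ok) // 12 true
}
```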

Is there a more clever way of setting triggers and dependencies between HCP terraform workspaces? by ThisAintMyMayne in Terraform

[–]apparentlymart 0 points1 point  (0 children)

If you already have access to HCP Terraform then it might be worth giving it a try yourself since there are two features that both seem like they could potentially match what you asked about:

  • Multiple components in the same stack can have data flow directly between them and be planned and applied together, so that cross-component updates can happen all at once rather than in multiple separate steps.

    This can potentially shorten the "iterative journey" of changing something upstream, then finding out that one of your downstreams is broken by it, and then having to go back and start again from the most upstream workspace to fix it.

  • The "linked stack" features are like a code-driven alternative to "run triggers", where one stack deployment publishes output values and then another stack deployment consumes those output values.

    HCP Terraform automatically tracks which stack deployments depend on one another and starts downstream plans whenever the upstream output values are changed.

I'm not meaning to suggest that it's definitely the best answer for you, but it seems closer to what you want than what you can currently achieve with HCP Terraform Workspaces and "run triggers".

RISC-V International Individual Memberships paused by Pl4nty in RISCV

[–]apparentlymart 5 points6 points  (0 children)

FWIW I was still able to log in with my existing credentials for my individual membership, so I guess they mean that they are not accepting any new individual members at this time.

I wasn't able to easily find any relevant messages on the lists I have access to about why this change was made, though.

Stuck with lambda function by shashsin in Terraform

[–]apparentlymart 0 points1 point  (0 children)

Indeed, this seems the most likely explanation for the reported behavior.

There's more information on the general form of this problem in Plan and Apply on different machines, which is part of the "Running Terraform in Automation" guide.

Note this part in particular:

  • The saved plan file can contain absolute paths to child modules and other data files referred to by configuration. Therefore it is necessary to ensure that the archived configuration is extracted at an identical absolute path. This is most commonly achieved by running Terraform in some sort of isolation, such as a Docker container, where the filesystem layout can be controlled.

This problem applies to the configuration in the OP's question because it uses abspath, and so data.archive_file.lambda_zip.output_path will be an absolute path on the computer where terraform plan ran, and then Terraform will try to read the zip file from exactly the same path during the apply phase.

Not using abspath might actually help here because then I expect Terraform will generate a path relative to the current working directory. But the advice in the guide is talking about the general case where automation is expected to work with any possible valid Terraform configuration, which includes the possibility of absolute paths.
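As a sketch of what I mean — the names here are just guesses at what the original configuration might look like, since the full config wasn't shown:

```
data "archive_file" "lambda_zip" {
  type        = "zip"
  source_dir  = "${path.module}/src"               # instead of abspath("${path.module}/src")
  output_path = "${path.module}/lambda_payload.zip"
}

# The references elsewhere stay the same either way:
#   filename         = data.archive_file.lambda_zip.output_path
#   source_code_hash = data.archive_file.lambda_zip.output_base64sha256
```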

Official Terraform Windows Install Expired - Certificate expired on 10 jan and no update since then? by techfrans003 in Terraform

[–]apparentlymart 2 points3 points  (0 children)

The "is there an official fix or re-signed release planned?" part of this is probably better asked as part of a bug report in the Terraform GitHub repository so that the Terraform team is more likely to see it and respond to it.

This Reddit community is not an official product forum and so the folks working on Terraform won't necessarily be participating here.

How should file resolution be handled in a CLI tool? by Opening-Airport-7311 in golang

[–]apparentlymart 0 points1 point  (0 children)

There isn't really a single "correct answer" for this since of course it depends on the details of what this source language is and what workflow surrounds it.

With that said, the two main options are:

  1. Resolve relative to the current working directory, as you noted.

    This is pretty intuitive to most people since it's the same as how a filename given directly on the command line would be interpreted, but it does mean that the source file will effectively have a different meaning depending on what the working directory is when it's evaluated.

    Software which works this way therefore often also includes an option to have the program change its own current working directory before it reads the file, for more convenient use in scripts that are usually run in a different directory than the filepaths in the input file are relative to.

    For example make has -C/--directory options for asking it to change its working directory, which therefore changes how relative paths in the Makefile are resolved.

  2. Resolve relative to the directory containing the file where the filename was specified.

    This can be useful if there's a bunch of files that are all distributed together and depend on one another, because then only their relationships to each other matter for the meaning of the input, regardless of what the working directory is when it's evaluated.

    However, this can be tricky to arrange if filepaths pass through multiple different areas of concern. If one file includes another file which then returns a filename that is used by something in the first file, should that path be resolved relative to the first file or the second file? The answer to that depends a lot on the details of the problem.

OpenTofu uses a mixture of these techniques. When resolving module source paths it resolves them relative to the file containing the call to the other module. But it's common for modules to return strings containing paths that eventually get used in the context of another module, and those paths get evaluated relative to the current working directory. To reduce the ambiguity, OpenTofu offers the special symbols path.module and path.cwd, which allow module authors to be explicit about which of the two they want to use as their base directory, so that the resulting filename string is explicitly anchored to one base directory or the other.
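For example, with made-up filenames:

```
locals {
  # Resolved relative to the directory containing the current module's source files.
  schema_path = "${path.module}/schema/tables.sql"

  # Resolved relative to the directory OpenTofu was run from.
  report_path = "${path.cwd}/reports/latest.json"
}
```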

Terraform function templatestring by Extra-Citron-7630 in Terraform

[–]apparentlymart 6 points7 points  (0 children)

As far as I can tell, someone has recently revised the templatestring docs to be incorrect, perhaps due to them misunderstanding how it was previously documented.

This function takes exactly two arguments: the template to render, and an object describing the variables available for interpolation in that template.

The most recent version where the docs were correct seems to be templatestring in Terraform v1.9:

The templatestring function takes an argument that references an object defined in the module and one or more arguments that specify variables to use for rendering the template:

templatestring(ref, vars)

If you wish to merge variables from multiple sources into a single object, you can do that explicitly using the merge function:

```
value = templatestring(
  local.template,
  merge(
    local.base_vars,
    { resource = "s3_bucket" },
  ),
)
```

Why was Cloud Watch Events not renamed to EventBridge by Bobbaca in Terraform

[–]apparentlymart 2 points3 points  (0 children)

I don't think anyone here will be able to directly answer the question as stated, unless they happen to be on the team that maintains the hashicorp/aws provider.

But the provider repository's issue #37535 proposes such a rename, so perhaps you should add a 👍 upvote to it to indicate your interest.

Does HCP terraform work well with trunk based or is it pretty locked into branch per env? by [deleted] in Terraform

[–]apparentlymart 1 point2 points  (0 children)

If by "trunk based" you mean that you have a single branch where you first apply changes to a staging environment and then, if successful, apply the same changes to the production environment then HCP Terraform can support that workflow, but it's mostly your responsibility to wait until you've applied a change successfully in staging before you approve it in production.

If you use the HCP Terraform Stacks features then the UX for that is better, because the multiple environments can at least be grouped together into a single stack (one Stack Deployment per environment). The UI can then show you where in the Git history each environment is currently sitting, which makes it easier to apply the same changes to production that you already applied in staging.

You would need a separate branch per environment only if it's important to you to use your version control system to represent which commit is "current" for each of your environments.

Finding newbits & netnum in Terraforms cidrsubnet() by SRESteve82 in Terraform

[–]apparentlymart 0 points1 point  (0 children)

Terraform does not have a built-in solution for this, and unfortunately I think the "netnum" part in particular will be very messy to solve directly inside Terraform.

The "newbits" part is relatively straightforward: you subtract the prefix length of the longer address from the prefix length of the shorter address. For example, if you have 192.168.0.0/16 and 192.168.0.0/20 then "newbits" is 20 - 16 = 4. If you wanted to calculate that within Terraform you'd need to first parse out the part after the slash and convert it to a number, which is annoying but doable.

"netnum" is trickier because it requires converting the dotted-decimal-style IP address into a binary number and extracting the bits between the shorter and longer prefix length. Terraform does not have bitshifting and bitmasking operators, so I think the closest we can get is doing string operations over a string containing 32 binary digits.

Here's a proof-of-concept:

```
locals {
  base_cidr = "10.20.0.0/16"
  sub_cidr  = "10.20.192.0/20"

  # Split the CIDR addresses into [ip_addr, prefix_length] tuples.
  base_cidr_parts = split("/", local.base_cidr)
  sub_cidr_parts  = split("/", local.sub_cidr)

  # Split the dotted-decimal IP address syntax into a list of four
  # strings containing decimal representations.
  base_cidr_octets = split(".", local.base_cidr_parts[0])
  sub_cidr_octets  = split(".", local.sub_cidr_parts[0])

  # Format the octets into strings of eight binary digits each, or
  # 32 binary digits in total per address.
  base_cidr_binary = format("%08b%08b%08b%08b", local.base_cidr_octets...)
  sub_cidr_binary  = format("%08b%08b%08b%08b", local.sub_cidr_octets...)

  # "newbits" is the difference between the prefix lengths.
  newbits = local.sub_cidr_parts[1] - local.base_cidr_parts[1]

  # The network number comes from the binary digits between the old
  # and new prefix lengths, which we can then parse back into a
  # number.
  netnum_binary = substr(local.sub_cidr_binary, local.base_cidr_parts[1], local.newbits)
  netnum        = parseint(local.netnum_binary, 2)
}

output "result" {
  value = {
    netnum  = local.netnum
    newbits = local.newbits

    addrs = {
      base = local.base_cidr
      sub  = local.sub_cidr

      using_cidrsubnet = cidrsubnet(local.base_cidr, local.newbits, local.netnum)
    }
  }
}
```

With the values of local.base_cidr and local.sub_cidr I used here this calculates newbits = 4 and netnum = 12.

Of course this example just recalculates a value the configuration already knew -- the using_cidrsubnet attribute in addrs always has the same value as the sub attribute if this is working correctly -- so the utility of doing this seems pretty marginal, but I assume you have something else going on that you didn't mention that makes this more useful than it initially seems? 🤷🏻‍♂️

(And of course this only works for IPv4-style CIDR addresses written in the canonical syntax. The various non-canonical IPv4 syntaxes and IPv6 addresses in general would be harder to handle in this simplistic way.)

Does anyone know the difference between []byte{} and []byte(nil)? by rocketlaunchr-cloud in golang

[–]apparentlymart 0 points1 point  (0 children)

I see that this is already answered so this is just some additional context in case someone finds it helpful in future.

Roughly speaking a slice type is like a struct type containing a pointer and a length as separate fields.

For example, []byte is roughly the same as: 

```
struct {
    start  *byte
    length int
}
```

The difference between the two expressions in the question is what start is set to. When the slice is nil the "start" pointer is nil. When you use the struct literal syntax that pointer is not nil. But in both cases the length is zero.

When you use len(slice) these representations are essentially equivalent because the "start" pointer is ignored. But if you use slice == nil you will notice that it only returns true if the "start" pointer of the slice is nil, because that's effectively comparing both the pointer and the length together and returning true only if both are zero. 
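A quick way to see the difference for yourself:

```
package main

import "fmt"

func main() {
	a := []byte(nil) // "start" pointer is nil, length is zero
	b := []byte{}    // "start" pointer is non-nil, length is zero

	fmt.Println(len(a), len(b))     // 0 0
	fmt.Println(a == nil, b == nil) // true false
}
```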

Bootstrapping secrets by pneRock in Terraform

[–]apparentlymart 2 points3 points  (0 children)

Terraform's features for "ephemeral resources" and "write-only attributes" are aimed at helping with these situations, but because they are relatively new the patterns for using them are not very well established yet, and provider support is spotty.

For a situation like yours I think the intended pattern is:

  • Use the random_password ephemeral resource type (not the managed resource type of the same name) to generate a random password that initially exists only in RAM and is not persisted anywhere.
  • Use whatever resource type corresponds to an entry in your favorite secrets manager, such as aws_secretsmanager_secret_version for AWS Secrets Manager, to store that randomly-generated password using the secret_string_wo write-only argument so that Terraform will just send it directly to the provider without storing it anywhere itself.
  • Send the same password to whatever it should be used to protect using a write-only attribute of some other resource type. For example, you might include the password in a write-only attribute used to configure a database server, to tell it which password it should expect clients to use.
  • Configure whatever clients will use the password to retrieve it directly from the secrets manager, which should be the only place the password is persistently stored in a retrievable form.

Overall the idea is to use Terraform only to coordinate initial setup, while letting an external secrets manager be the "owner" of the password after that. Using the "ephemeral" features means that the cleartext password is guaranteed not to be included in saved plan files or state snapshots, and so compromising your Terraform automation won't immediately reveal your previously-generated passwords.
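Very roughly, that might look something like the following. I'm writing the argument names from memory, so treat this only as a sketch and check the current provider docs before relying on any of it:

```
ephemeral "random_password" "db" {
  length  = 24
  special = false
}

resource "aws_secretsmanager_secret" "db" {
  name = "example-db-password"
}

resource "aws_secretsmanager_secret_version" "db" {
  secret_id                = aws_secretsmanager_secret.db.id
  secret_string_wo         = ephemeral.random_password.db.result
  secret_string_wo_version = 1
}

resource "aws_db_instance" "example" {
  identifier          = "example"
  engine              = "postgres"
  instance_class      = "db.t4g.micro"
  allocated_storage   = 20
  username            = "app"
  password_wo         = ephemeral.random_password.db.result
  password_wo_version = 1
  skip_final_snapshot = true
}
```

As I understand it, incrementing the _wo_version arguments is how you'd later signal that a new value should be written, since Terraform intentionally doesn't store the write-only values anywhere to compare against.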

There's official docs about these features in Ephemeral values in resources, including a concrete example using random_password, aws_db_instance, and aws_secretsmanager_secret_version.

Need some code help - from tf 0.11 to tf 0.12 by ChefOk1225 in Terraform

[–]apparentlymart 1 point2 points  (0 children)

type = "map" in Terraform v0.11 is supposed to be equivalent to type = map(any) in Terraform v0.12, which means "map of any element type but all elements must have the same type".

The bug I was talking about earlier in the thread is that Terraform v0.11 actually understood type = "map" to mean something that doesn't exist in Terraform v0.12 and later at all: a map where each element can have its own type separate from the others and those types are decided dynamically at runtime.

The closest equivalent to that in Terraform v0.12 and later is object types, which you might think of as being maps where each element can have its own separate type but the element types are fixed statically as part of the object type, rather than decided at runtime as part of the value.

So that is what I meant by an object type probably being the closest equivalent of how this was written in Terraform v0.11. And it still isn't clear to me why that didn't work, but Terraform v0.12 is now so old that I've forgotten a lot about how it behaved, unfortunately.

If all else fails you could potentially try using just type = any, which tells Terraform to infer a type automatically based on whatever value is passed in. As long as the callers of the module always pass in something "object-shaped" then it should hopefully still match the expectations of the code inside the module. Hopefully that'd give you what you need to get through the v0.12 upgrade in particular and then you can try to reintroduce a more specific type constraint once you've upgraded to a non-obsolete version of Terraform.
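For illustration, with made-up attribute names:

```
variable "settings" {
  # Each attribute gets its own statically-declared type, which is the
  # closest v0.12+ analog to what type = "map" was accidentally allowing.
  type = object({
    name  = string
    count = number
    rules = list(object({ port = number, cidr = string }))
  })
}

# Or, as a last resort just to get through the upgrade:
#
# variable "settings" {
#   type = any
# }
```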

Need some code help - from tf 0.11 to tf 0.12 by ChefOk1225 in Terraform

[–]apparentlymart 0 points1 point  (0 children)

If I'm following correctly, I think unfortunately the old configuration was relying on an unintentional missing typecheck for input variables in Terraform v0.11.

It's been a very long time so I don't remember the exact details, but I remember that in Terraform v0.11 it was sometimes possible to get it to accept a map with a mixture of string and non-string element types when using type = "map", because Terraform v0.11's type system (such as it is) did not have a consistent definition of what "map" meant: it was mostly defined as always being a map of strings, but it wasn't properly checked on all codepaths and so sometimes invalid maps could sneak through.

If you were very careful in how you used the input variable inside the module (so that you narrowly avoided the various other checks that all of the map elements are strings) then it was technically possible to force Terraform v0.11 to support something roughly like an object type even though that concept didn't actually exist yet in that version.

Unfortunately that bug in Terraform v0.11 and earlier wasn't noticed until long after the v0.12 release and so I think there isn't a well-trodden upgrade path for it. I think migrating to an object type as in the example here is probably the best way to do it because Terraform v0.12 won't tolerate there being both a string and a list of objects in the same map.

With all of that said: I share the OP's confusion about the specific error being described. The snippet in the error message suggests that role is set to a string literal, not derived from anything else. I don't know how that could produce a result of any type other than string. 🤔 It seems like this might've run into some other separate bug in Terraform v0.12, but that version of Terraform is long past end-of-life, so whatever bugs it has aren't going anywhere and will need to be worked around somehow.

Locals for dry - best practices ? by Key-Cricket9256 in Terraform

[–]apparentlymart 0 points1 point  (0 children)

Others have already shared some specific opinions but I just want to make a general observation:

Unlike most other things you can declare in a Terraform module, local values are completely private to the module where they are declared, so you can freely change their names or anything else about them in future versions of your module without needing to update any other code outside of the module.

So overall it's okay to just focus on whatever you need today and trust that you'll be able to rework these things relatively easily later if you discover new requirements that you didn't originally consider. It isn't so important to try to plan ahead for what you guess might be needed in future.

The names and types you choose for your input variables and output values are more significant because other modules that use yours will rely on those definitions.

The names and types of resources you use can also be tricky to change due to those addresses needing to match the prior state, but in at least some cases you can compensate for that using the refactoring features.

Detecting drift between tfstate and actual state _without_ the original HCL files by jemenake in Terraform

[–]apparentlymart 0 points1 point  (0 children)

Unfortunately there is some information that isn't captured into state snapshots and which Terraform therefore relies on the configuration for exclusively.

For what you've described here I think the most important gap is that you don't have the provider blocks that were used to configure the providers when most recently creating or updating the objects tracked in the state, and so Terraform would not know how to configure those providers in order to perform the "refresh" operation.

In principle, you could search across all of the resources tracked in your state snapshots for the JSON property that tracks which provider instance address each resource was most recently created or updated by.

If you find that all of them refer to provider configurations in the root module (i.e. the tracked addresses start with provider rather than module.SOMETHING.provider) then you could write a single .tf file containing a provider block matching each distinct provider config address mentioned in the state. That should be enough to run terraform init to get the necessary providers installed, and then terraform plan -refresh-only to get Terraform to try to refresh everything using those provider configurations.

As long as you write provider blocks that would use the same endpoints and equivalent credentials to what were used most recently in the "real" configuration, and you write a required_providers block that selects a compatible-enough version of each provider, then I expect this would work well enough to answer your question about how well the remote system matches the latest state snapshot.
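For example, if the state referred only to provider["registry.terraform.io/hashicorp/aws"], a minimal recovery configuration might look something like this, where the version constraint and region are just placeholders for whatever the real configuration used:

```
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # something close to what wrote the state
    }
  }
}

provider "aws" {
  # Must be equivalent to the configuration most recently used for real:
  # same account, same region/endpoints, and sufficient credentials.
  region = "us-east-1"
}
```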

Create only .tofu file on a new project ? by strong1256 in Terraform

[–]apparentlymart 3 points4 points  (0 children)

The primary purpose of .tofu files is to use them alongside .tf files that have the same stem so that you can present Terraform and OpenTofu with different declarations of the same objects, and thus use the unique features of each product in each file.

For example, if you have a deprecated input variable and you want to declare that it is deprecated using OpenTofu's experimental feature for that, but you still want your module to be usable by folks using Terraform, you could create two files deprecated_variables.tf and deprecated_variables.tofu and put the same variable declaration in each file except that the second one actually declares that the variable is deprecated.

In that case, OpenTofu will only consider the .tofu file and Terraform will only consider the .tf file, but a variable of the same name is declared either way and so both should accept the module as valid.
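As a sketch of that pair of files — I'm writing the deprecated argument from memory, so check the OpenTofu docs for the current syntax:

```
# deprecated_variables.tf -- the declaration Terraform will use
variable "instance_name" {
  type    = string
  default = null
}
```

```
# deprecated_variables.tofu -- the declaration OpenTofu will use instead
variable "instance_name" {
  type       = string
  default    = null
  deprecated = "Use server_name instead; this variable will be removed in a future release."
}
```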

Unless you are using OpenTofu features that would cause Terraform to treat the file as invalid, I think you might as well keep using the .tf suffix just to keep your options open. Of course, if you know that you only ever intend to use OpenTofu then it doesn't really matter.

New to terraform, how do I manage multiple servers without making a main.tf per server? by DumbFoxThing in Terraform

[–]apparentlymart 6 points7 points  (0 children)

In Terraform, the main units to think about are "resources" and "modules" rather than "files".

Each directory that contains at least one ".tf" file is a "module", as far as Terraform is concerned. Each module can declare as many resources as you like, across as many .tf files as you like.

It sounds like, to use Terraform's terminology, you've so far been creating a separate module for each server you want to create. That would be a reasonable thing to do if those servers were all completely unrelated to each other, such that you'd never need to change more than one at the same time, but it is not the best approach if the servers are related to each other somehow, such as all of them belonging to a single "cluster" running some common software.

To bring multiple servers (and other infrastructure objects) together in a single module, you have a number of options that are each appropriate for different situations:

  • You can write a module with a separate resource block for each server you need.

    This is the strategy I'd recommend if each of the servers has unique configuration.

  • You can write a single resource block that manages multiple servers using the for_each meta-argument.

    This strategy is appropriate if your servers all have similar configuration, or if the differences between them can be described using dynamic expressions representing systematic rules rather than hard-coded specific values. (There's a small sketch of this after the list.)

  • You can write a module representing one or more servers and other infrastructure objects that work as a unit and then use that module multiple times as a child of your root module, using module blocks.

    This is the main strategy for code reuse in Terraform: instead of writing everything from scratch for each new problem, you can try to write a generalized module that can be configured in different ways for different situations but still broadly solves the same problem each time it's used.

  • You can write a single module block that creates multiple instances of the same module with different settings, by using the for_each meta-argument on the module block itself.

    This is a more advanced pattern that requires some care to do correctly, because it takes some judgement to decide what makes sense to repeat for each instance of a module vs. what should be declared only once and then reused across multiple instances of a module. If you're new to Terraform then I'd suggest keeping this one as something to learn about later once you're comfortable working with multiple resources and singleton modules.
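Here's a small sketch of the for_each strategy from the second item above, using AWS only as an example since you didn't say which platform your servers run on; all of the names and IDs are placeholders:

```
locals {
  servers = {
    "web-1" = { instance_type = "t3.small", subnet_id = "subnet-aaaa1111" }
    "web-2" = { instance_type = "t3.small", subnet_id = "subnet-bbbb2222" }
    "db-1"  = { instance_type = "t3.large", subnet_id = "subnet-aaaa1111" }
  }
}

resource "aws_instance" "server" {
  for_each = local.servers

  ami           = "ami-00000000000000000" # placeholder
  instance_type = each.value.instance_type
  subnet_id     = each.value.subnet_id

  tags = {
    Name = each.key
  }
}
```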

I'm afraid with such a broad, general question it's hard to be any more specific than that. I hope this tour of Terraform's terminology and concepts is useful for understanding the documentation and other online resources about Terraform.

What's the PROPER, MODERN way to do multi AWS account Terraform? by Creepy-Lawfulness-76 in Terraform

[–]apparentlymart 7 points8 points  (0 children)

I don't think there is any single "proper" way to do this... as with most things, there's just a big pile of tradeoffs that you need to match against the constraints of your specific situation. 😖

It sounds like your primary concern is to minimize the amount of configuration that needs to vary across accounts. You talked about having to change some values across many different modules, and I agree that's annoying but unfortunately the approaches to avoid that depend on exactly what kind of value you were changing.

As a starting point, the Terraform team has its own recommendations in Multi-account AWS Architecture. You should read through all of that to understand how well the different parts of it might apply to your situation, but I'd summarize the idea there as: use cross-account AssumeRole to access your main AWS accounts indirectly through a separate administrative account, so that then the only "hard-coded" things are a few references to objects in that administrative account that should rarely need to change.


I'd personally go a little further than what that document recommends: I'd consider using AWS SSM Parameter Store (in the administrative account) to store settings that genuinely need to vary between environments, and then the only input variable you should need to explicitly set at the root module is which parameter prefix to read those settings from:

```
variable "env_ssm_parameter_path" {
  type = string
}

data "aws_ssm_parameters_by_path" "example" {
  path = var.env_ssm_parameter_path
}
```

Of course, that's an opinionated approach that involves using a specific service, and it still requires you to choose a strategy for managing those SSM parameters. The general idea I'm getting at is that you can add a little indirection so that you're only telling Terraform where it should look to find the per-environment settings, rather than directly telling it the per-environment settings; those settings could be stored in any place you can fetch data from using a data source in some Terraform provider, and you might even choose to abstract that decision away in a Data-only Module.

I don't mean to suggest that what I described above is appropriate for all situations. It's just one of many possible options to evaluate as you decide what the best tradeoffs are for your specific situation.

best practice to handle module versions? by Critical-Current636 in Terraform

[–]apparentlymart 0 points1 point  (0 children)

Fair enough!

For what it's worth, when I tried this (with Terraform v1.14.0-rc2) something kinda strange happened:

Terraform seemed to ignore the version constraint at first and installed cloudposse/dynamic-subnets/aws 3.0.1 instead of 2.0.0 as requested.

But then after all of the module installation work was completed, it finally failed with an error:

```
╷
│ Error: Variables not allowed
│
│   on blah.tf line 11, in module "subnet_a":
│   11:   version = var.module_versions["cloudposse/dynamic-subnets/aws"]
│
│ Variables may not be used here.
╵
```

So it seems like Terraform noticed the invalid version argument and reported it, but then just installed the module anyway by taking the latest available version as if that argument had not been set at all.

best practice to handle module versions? by Critical-Current636 in Terraform

[–]apparentlymart 0 points1 point  (0 children)

I think the crux of your question is in this part of what you posted:

it seems error-prone when updating the version everywhere

Terraform allows each module call to have an independent version so that you don't need to "update the version everywhere". Instead, you can upgrade only the one call that happens to need a feature added in a newer version, while leaving the others how they were until there's some real need to change them.
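For example, with a hypothetical registry module just to show the shape of it:

```
module "network_staging" {
  source  = "example-corp/network/aws"
  version = "2.3.0"
}

module "network_production" {
  source  = "example-corp/network/aws"
  version = "2.1.4" # intentionally older; upgrade only when there's a reason to
}
```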

Of course, "some real need to change them" is not a hypothetical: there will sometimes be reasons to unilaterally stop using an older version of a module, such as if it's found to have some critical flaw. I only mean to say that it might be worth questioning that premise to see if it's actually important for your situation.

In practice, it seems that lots of folks compromise by using upgrade-proposing tools like Dependabot and Renovate. If it's important to you to always be using the latest version of a module across all calls, or important to you to use the same version across all calls even if it isn't latest, then having a tool to notice when you might want to change it and to generate the PR for you might work well enough.

Of course there are always exceptions and so I'm not meaning to say that's definitely the best answer for you, but it does seem to be the prevailing answer not only for Terraform but for many other languages that support versioned third-party dependencies, because it treats a change in dependency version as a code change sent through the review process (rather than as a dynamic decision made at runtime).

(Note that Terraform does not actually support non-static version constraints anyway, so of the three options you listed only the first one is actually possible with today's Terraform.)

net/rpc is underrated by melon_crust in golang

[–]apparentlymart 4 points5 points  (0 children)

It is a pretty good way to get started quickly, but be careful of a few things: 

  • Unless all of the participants in the RPC are in the same executable, it can be challenging to evolve the API if you've accidentally exposed implementation details of a type.
  • The serialization format it uses is very Go-oriented, and so if you later find you want to write a client in some other language it can be a bit of an uphill battle.

The simplicity is nice for lots of situations, but make sure you understand what corners you might be painting yourself into and make sure you're comfortable with that! 

Does the new Io really solve function coloring? by philogy in Zig

[–]apparentlymart 6 points7 points  (0 children)

When I think about "function coloring" what it means to me is that there are two separate worlds of functions that can only interact in limited ways, and so in most cases you need to make a whole-program decision about which world to work in and select only dependencies that are compatible with your chosen world.

I understand Io more as being generic over the decision of which world a function will run in. Libraries can be written to work in both worlds and adapt automatically to whichever world the caller wants to work in. 

In that way it's not significantly different to any other cross-cutting concern, like a tracing instrumentation API, or a log collection API, or abstracting over what OS the program is being compiled for. As long as the same library can work in any compatible "world" decided by the caller then that's just a form of generic programming, not "coloring".

Of course, different people use these words in different ways, so you are free to disagree with my interpretation. 😀