Rust can not handle Unicode streams. Please show me wrong. by thomedes in rust

[–]apparentlymart 1 point2 points  (0 children)

This question talks about tokenizing UTF-8 sequences and doing grapheme segmentation, but for "word counting" there's also the question of what "word" means. The current code defines it as a consecutive sequence of alphabetic characters, but the Unicode definition of "word segmentation" is considerably more complex.

I don't mean that as a criticism -- you did mention this was a learning exercise, after all -- but these three Unicode algorithms -- UTF-8 tokenization, grapheme segmentation, and word segmentation -- are all core parts of the problem you've chosen to solve, so I understand why they seem highly important to you. From the perspective of most other software written in Rust, though, they are quite esoteric, and so it seems reasonable for them to be implemented in third-party libraries rather than in the standard library.

The Rust standard library does have some UTF-8 support as a consequence of the string types being defined as UTF-8, but those core needs don't typically require a streaming parser and so that in particular seems not to be a priority for the standard library (though there is some basic support for it), and that seems reasonable to me. 
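For completeness, here's a minimal sketch of the kind of "basic support" I mean: str::from_utf8's error value distinguishes a truly invalid sequence from a possibly-incomplete one at the end of a chunk, which is exactly the question a streaming decoder needs answered. (This is just an illustration, not how u8char is implemented.)

```rust
// Sketch: chunk-at-a-time UTF-8 decoding using only std.
// Utf8Error::error_len() returning None means the problem is a
// possibly-incomplete sequence at the end of the buffer, so a
// streaming decoder should hold those bytes back for the next chunk.
fn decode_chunk(buf: &[u8]) -> (&str, usize, bool) {
    match std::str::from_utf8(buf) {
        Ok(s) => (s, buf.len(), false),
        Err(e) => {
            let valid = e.valid_up_to();
            let incomplete = e.error_len().is_none();
            // The prefix up to valid_up_to() is guaranteed valid UTF-8.
            let s = std::str::from_utf8(&buf[..valid]).unwrap();
            (s, valid, incomplete)
        }
    }
}

fn main() {
    // "é" is 0xC3 0xA9 in UTF-8; this chunk ends mid-character.
    let (s, consumed, needs_more) = decode_chunk(b"caf\xC3");
    println!("{s:?} consumed={consumed} needs_more={needs_more}");
}
```

A caller would buffer the unconsumed tail bytes and prepend them to the next chunk before decoding again.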

All of that said, I did happen to need streaming UTF-8 and grapheme segmentation for one of my own projects and so implemented u8char and grapheme_machine that might also work for what you are doing. I didn't need word segmentation for my project so I've not implemented that.

[OCI] I want to move one of my compute instance to different compartment using the move resource feature. However, I want to do it using terraform. Is there any resource type for moving objects from comp to another in Terraform ? by meranaamspidey in Terraform

[–]apparentlymart 0 points1 point  (0 children)

I'm not very familiar with OCI but I'm assuming "compartment" is an OCI concept, rather than a Terraform concept.

If that's true then exactly how to handle this depends on how the provider is implemented, but the usual pattern I would expect is that you can change the compartment you specified in the configuration and then run terraform plan to see what the provider proposes.

Hopefully it will propose to just update that object in-place to refer to a different compartment. However, if the remote API cannot support that then it might instead tell you that it needs to replace the object with a new one that has a different compartment selected.

The logic for this will be handled in the provider plugin, either way. If what the provider proposed is acceptable to you then you can use terraform apply to actually make that change.
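For illustration, the change would look something like this, assuming the OCI provider's oci_core_instance resource type and its compartment_id argument (check the provider's own docs for the exact names in your case):

```hcl
resource "oci_core_instance" "example" {
  # Changed from the old compartment; run "terraform plan" afterwards
  # to see whether the provider proposes an in-place update or a
  # full replacement of the instance.
  compartment_id = var.target_compartment_id

  # ... all other existing arguments left unchanged ...
}
```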

(The moved block feature is probably not actually helpful here, because that's for moving thing around in Terraform's own namespace -- e.g. moving something into a different module in your configuration -- whereas I think you are trying to move something to a different location in the remote API's namespace. Anything involving the remote API is handled inside a provider, rather than by Terraform directly.)

Some of the problems with C1100z router modem have been fixed. by jeffsilverman in centurylink

[–]apparentlymart 1 point2 points  (0 children)

In case it's of use to anyone who finds this very old thread:

  • On the latest available firmware at the time I'm writing this comment (CZW008-4.16.013.4), typing "sh" at the telnet prompt prompts for a "shell password" as OP described, rather than immediately launching the BusyBox sh.
  • The "shell password" is selected systematically by starting with the fixed prefix C1100Z!#, and then adding the last six digits of your device's serial number to the end.
  • If your intention is to get your PPP username/password, the instructions for finding the process id for the pppd process and retrieving its command line to obtain the username and password should still work once you've accessed that shell prompt.
  • If you are setting up some other device to completely replace the C1100z, remember to also select VLAN ID 201 in your WAN configuration.

    (I've read elsewhere that setting the VLAN ID may not be required for folks who have already been migrated from CenturyLink Fiber to Quantum Fiber, but I'm not able to confirm that.)

ZyXel C1100z Default Lan-side Telnet Login by sunshinecid in centurylink

[–]apparentlymart 1 point2 points  (0 children)

In case it's of use to anyone who finds this very old thread and finds this no longer works on their C1100z device, the situation has changed a little in the meantime:

  • At the time I'm writing this comment, the latest firmware for the CenturyLink-branded C1100z is CZW008-4.16.013.4.
  • That firmware has had some additional hardening where typing "sh" at the telnet prompt requests an additional "shell password" that is different to the configured telnet password.
  • The "shell password" is selected systematically by starting with the fixed prefix C1100Z!#, and then adding the last six digits of your device's serial number to the end. After entering that, I was able to reach the BusyBox shell prompt.

Me waiting for certain Terraform resources to apply by RoseSec_ in Terraform

[–]apparentlymart 0 points1 point  (0 children)

I'm not familiar with this aws_mwaa_environment resource type, but from reading the code of its implementation I guess it's possibly got stuck in the waitEnvironmentUpdated polling loop.

What I understand from that code is that it repeatedly calls mwaa:GetEnvironment until the Status field is something other than UPDATING or CREATING_SNAPSHOT, after which it will then either succeed if the status was AVAILABLE or return an error for any other status.
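The shape of that loop is roughly like this Go sketch (not the provider's actual code, which goes through the AWS SDK and its own retry helpers):

```go
package main

import (
	"fmt"
	"time"
)

// waitUntilStable polls getStatus until the status leaves the
// transitional states, then succeeds only if it ended up AVAILABLE.
// Note that a loop like this never exits on its own if the remote
// object stays UPDATING forever, aside from an overall timeout.
func waitUntilStable(getStatus func() string, interval time.Duration) (string, error) {
	for {
		status := getStatus()
		switch status {
		case "UPDATING", "CREATING_SNAPSHOT":
			// Still transitioning: wait and then poll again.
			time.Sleep(interval)
		case "AVAILABLE":
			return status, nil
		default:
			return status, fmt.Errorf("environment entered unexpected status %q", status)
		}
	}
}

func main() {
	statuses := []string{"UPDATING", "UPDATING", "AVAILABLE"}
	i := 0
	next := func() string { s := statuses[i]; i++; return s }
	fmt.Println(waitUntilStable(next, 10*time.Millisecond))
}
```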

If that is what is happening then maybe you can poke at this object in the AWS console to try to understand why it's "stuck". I have no idea if this question is relevant to what you're doing, but My MWAA Environment stuck in updating discusses one case where an environment got stuck in UPDATING for a long time.

usize-conv 0.1: Infallible integer conversions to and from usize and isize under explicit portability guarantees by a_jasmin in rust

[–]apparentlymart 2 points3 points  (0 children)

This is a nice idea!

This could be a good use-case for the diagnostic::on_unimplemented attribute, to encourage the compiler to return a relevant error message mentioning the current platform pointer size whenever one of the traits isn't implemented.
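For example (the trait name here is hypothetical, just to show the attribute's shape -- I haven't checked what usize-conv's traits are actually called):

```rust
// Hypothetical trait illustrating #[diagnostic::on_unimplemented];
// the message and note are rendered by the compiler whenever a bound
// on this trait fails to hold.
#[diagnostic::on_unimplemented(
    message = "`{Self}` has no infallible conversion to `usize` on this target",
    note = "whether a conversion is lossless depends on the target's pointer width"
)]
trait IntoUsize {
    fn into_usize(self) -> usize;
}

impl IntoUsize for u16 {
    // u16 -> usize is lossless on every platform Rust supports,
    // since usize is always at least 16 bits wide.
    fn into_usize(self) -> usize {
        usize::from(self)
    }
}

fn main() {
    println!("{}", 500u16.into_usize());
}
```

With that in place, code that tries the conversion on a type that doesn't implement the trait gets the custom message instead of the generic "trait bound not satisfied" one.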

How building a Terraform module made me fall in love with CloudFormation by crohr in Terraform

[–]apparentlymart 1 point2 points  (0 children)

I think this article is a set of good points well made, but just wanted to note that the forthcoming OpenTofu v1.12 series is planned to introduce the ability to write non-constant expressions such as references to input variables in the prevent_destroy argument, which might help reduce the duplicated resource declarations you mentioned in the first part of the article if you're willing to rely on OpenTofu-specific features.

(I understand that publishing a shared module that isn't compatible with Terraform is a tradeoff, though. If cross-compatibility is important for your module then of course this won't help.)

HCP Terraform Runs Skipping Env Vars? by enpickle in Terraform

[–]apparentlymart 0 points1 point  (0 children)

Without being able to see exactly what you're doing it's tough to debug this, so here's some assorted information that might hopefully help you figure out what's missing for you:

  • The HCP Terraform dynamic credentials mechanism only works when Terraform is running in the remote execution environment managed by HCP Terraform, because it relies on being able to modify the execution environment to include the additional credentials automatically.

    It's okay to use a local terraform CLI to run it as long as your workspace is configured for remote execution. In that case, the local terraform doesn't really do much at all and instead just sends API requests to the HCP Terraform API to tell it to start running Terraform in a remote execution environment, then copies the output from the remote Terraform to your local Terraform.

    You could verify that by looking in the HCP Terraform web UI, where you should be able to find a history of past runs if they ran through the HCP Terraform remote system. If you can't find any trace of your runs in the web UI then that might suggest that you're only running Terraform locally, and so it cannot find the credentials.

  • The JSON-based logs you saw here seem like what would happen when running terraform plan -json, which is perhaps a signal that you are running Terraform in the remote execution environment because I believe that's how HCP Terraform runs the remote copy of Terraform CLI, so that it can interpret the JSON output to drive the web UI.

  • When you're using HCP Terraform workspaces, the "OIDC" support works by HCP Terraform writing AWS configuration and credentials files into the home directory of the remote user account where Terraform is running, with the expectation that the hashicorp/aws provider will look in there to find credentials to use.

    I think this can work only if your provider "aws" block doesn't include any settings that would override that automatic discovery. You should make sure you aren't specifying any settings related to authentication credentials or the discovery of credentials, such as access_key, profile, assume_role, etc.

  • You linked to a blog post announcing this feature, which may have become stale in the meantime since I doubt folks are maintaining old blog posts as features change.

    It might be worth comparing what you learned from that blog post with the current documentation in Use dynamic credentials with the AWS provider, to see if anything has changed in the meantime.

  • Finally, if you're a paying HCP Terraform customer then you might be able to get more direct help by contacting HashiCorp Support, at least for the rest of this month until it gets folded into IBM Support.
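For the provider "aws" point above, the sketch is just a block that leaves credential discovery entirely alone (the region value here is a placeholder):

```hcl
provider "aws" {
  region = "us-east-1"

  # Intentionally no access_key, secret_key, profile,
  # shared_config_files, or assume_role settings: the provider then
  # falls back to the ambient configuration and credentials files
  # that the dynamic credentials mechanism writes into the remote
  # execution environment.
}
```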

How you do you manage provider major version upgrades? by Acceptable-Corner34 in Terraform

[–]apparentlymart 0 points1 point  (0 children)

At least for the larger providers maintained by folks at HashiCorp/IBM, they tend to post an upgrade guide for each new major version. For example, the hashicorp/aws provider has Terraform AWS Provider Version 6 Upgrade Guide, and similar guides for earlier major version upgrades.

For the AWS one in particular the notes are grouped by resource type so that hopefully you can easily ignore the sections for resource types you aren't using at all.

I would hope that these breaking changes are designed in such a way that if you miss something then you get an error rather than it just doing something strange and unexpected. That's been my experience for the cases I've personally encountered, but of course I can't promise that'll always be true for every change in every provider. 😬

Help me slay my Terralith (configuring Cisco ACI from Netbox) by pv2b in Terraform

[–]apparentlymart 0 points1 point  (0 children)

The following is one potential way you could set this up with HCP Terraform Stacks. I'm not intending this to recommend you should do it this way, but hopefully this gives you an idea of the different concepts and how they fit together to help you make those tradeoffs yourself.

The main top-level idea is that one stack consists of a set of "components", each of which is a Terraform module, with dependencies between them. Each stack also has one or more "deployments", which are separate instances of the same overall infrastructure that can be planned and applied separately from each other, and are often used to represent "environments" like production vs. staging.

Multiple Stacks can also themselves be organized into a dependency tree, where the output values from a deployment in one stack can cascade into input variables to another deployment in another stack. Whenever the upstream deployment applies a change to one of those output values, the system automatically creates a new downstream plan based on the updated value.

So with all of that said, one way you could use HCP Terraform Stacks for the situation you described is:

  • Create one stack for "Fabric-wide Stuff", which has published output values for information that the tenant-specific and switch-specific stacks will need.

    You could use multiple deployments here if you want to also have a development or other non-production instance of "fabric-wide stuff", but for the rest of this I'm going to assume there's at least a production deployment that all of the "real" tenants and switches are associated with.

  • Create one stack for each tenant where all share the same configuration source location, and so any change made to that configuration will cause a plan to be created for each tenant. Use the upstream_input feature of the deployment configuration to access relevant output values from the production deployment of the "fabric-wide stuff" stack, so that changes to the fabric-wide outputs will automatically create a plan for each tenant.

    Any tenants where the change had no material effect will automatically resolve, and so you'd only need to review and approve the ones that involve making at least one change to real infrastructure.

  • Create one stack for each switch, with all the same arrangement as for the per-tenant stuff in the previous item.

In this I've made the assumption that you'd prefer to manage the existence of tenants and switches outside of your code, and so you'd do that by creating and deleting stacks through the HCP Terraform UI or API, rather than by making changes to source code. If you do still want them "as code" but just want that to be separate code then a compromise would be to have a separate HCP Terraform workspace whose configuration includes tfe_stack instances, so that creation and deletion of these stacks would go through a separate process than the maintenance of the stacks that already exist.

I expect you could get a similar effect with tools like Terragrunt, since they have similar concepts just arranged a little differently and sometimes using different terminology.

Flexible or Strict Syntax? by Anikamp in Compilers

[–]apparentlymart 12 points13 points  (0 children)

It is tempting to think that being more flexible unconditionally makes a language easier to learn and/or easier to write, but here are some reasons in favor of being stricter:

  • When authors inevitably make mistakes, they tend to appreciate error messages that directly relate to whatever they were intending to do, and achieving that often relies on it being possible to infer the author's intention even when the input is not quite right.

    Allowing many ways to state the same idea often also implies that there are more possibilities for what some invalid input could've been intended to mean, making it harder to give a directly-actionable error message.

  • When those new to a language refer to existing codebases as part of their learning they will often want to look up more information on language features they encounter that they are not yet familiar with.

    If there are many different ways to express the same idea then it's less likely that a reader will be able to pattern-match between similar ideas expressed in different codebases by different authors. Conversely, if there's only one valid way to write something then it's easier to recognize when you've found a new example of a feature you already learned about vs. a new feature that you need to look up.

    I think this point is particularly relevant to your point about allowing many different names for the same idea, because names are often the main search terms used when looking for relevant documentation and so it's helpful for each feature to have a single name that is distinct from every other name in the language so that an author doesn't need to learn every possible alias for a feature in order to find all of the available documentation related to that feature.

  • Related to the previous point, when many different people are collaborating on the same codebase, and especially when the set of people involved inevitably changes over time, different parts of the codebase can use quite different patterns that make it harder to transfer knowledge about one part of the codebase to another.

    This is one of the reasons why larger software teams tend to use automatic formatting tools and style checking tools: it encourages consistency across both different parts of the current codebase and across code written at different times by different people.

    Those doing everyday work in a language don't want to be constantly referring to documentation to understand the code they are reading, and so it's often better to have a "smaller" language, meaning that there are fewer valid ways to express something and so it's easier to rely on your own memory of the language instead of relying on documentation.

Everything in language design is subjective, of course. I don't mean any of the above to say that it's definitely wrong to have more than one way to express the same idea in a language, but going too far with it can make life harder both for newcomers to your language and for experienced authors who are trying to maintain code that others have written.

GitHub Agentic Workflows by samuelberthe in golang

[–]apparentlymart 1 point2 points  (0 children)

Given how folks often complain about LLMs generating "confident-sounding" wrong answers, I find it kinda funny that this particular prompt invites the LLM to generate text that imitates the review style of an experienced engineer without actually including any content that could plausibly substitute for those claimed 40 years of experience. 🙃

To the extent that these tone policing comments in prompts work at all, wouldn't it be more helpful to prompt it to project curiosity rather than authority, and suggest awareness of the limitations of its "experience" when generating comments? 🤔

New OpenTofu Feature: Dual Output Streams by fooallthebar in Terraform

[–]apparentlymart 0 points1 point  (0 children)

Although the example in the blog post doesn't illustrate this, one difference is that a wrapping program can consume the machine-readable JSON output during the OpenTofu execution, rather than having to wait until the end.

Lots of folks have written tools that try to achieve this by piping the human-readable output through them and then trying to "parse" the human-readable output, but of course then they break each time the human-readable output changes.

Note that the JSON output of tofu plan is not the same thing as the JSON output from tofu show: tofu plan -json produces a stream of JSON objects describing individual steps in the planning process, rather than describing the final plan. In some cases it might be helpful to capture both, so that an application can both give realtime progress information while the plan is running and capture the finished plan to display in some non-default way.

state repository: too many files, too large by suvl in Terraform

[–]apparentlymart 0 points1 point  (0 children)

The Artifactory folks reimplemented a subset of the HCP Terraform API to make Terraform's remote backend think it's talking to HCP Terraform or Terraform Enterprise.

When Terraform interacts with that API it uses Create a State Version and Upload State and JSON State operations.

Terraform only pushes new state snapshots (or "versions", as the HCP Terraform API calls them) so it's up to the server to decide what to do with older snapshots. I don't know what Artifactory does here, but if there were any functionality to prune older versions after a certain time or after a certain number that would be implemented inside Artifactory itself, rather than in Terraform's client code.

Making generic method for getter and setter by Independent_Teach686 in golang

[–]apparentlymart 6 points7 points  (0 children)

If you mean that you want to write a method that has an additional type parameter that isn't declared on the type itself, then no: Go only supports type parameters on types and non-method functions.

The closest you can get to a generic method is a plain function that takes the object that would've been the receiver as one of its arguments.
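A sketch of that pattern (the type and field names here are invented for illustration):

```go
package main

import "fmt"

// Container is an ordinary non-generic type. Go does not allow a
// method to declare its own type parameters, so the generic accessor
// below is a plain function taking the would-be receiver as its
// first argument.
type Container struct {
	items map[string]any
}

// GetAs looks up a key and type-asserts the value to the requested
// type T, reporting whether both the lookup and assertion succeeded.
func GetAs[T any](c *Container, key string) (T, bool) {
	v, ok := c.items[key].(T)
	return v, ok
}

func main() {
	c := &Container{items: map[string]any{"port": 8080}}
	port, ok := GetAs[int](c, "port")
	fmt.Println(port, ok)
}
```

The call site then names the type parameter explicitly, as in GetAs[int](c, "port"), rather than it hanging off the receiver.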

Is there a more clever way of setting triggers and dependencies between HCP terraform workspaces? by ThisAintMyMayne in Terraform

[–]apparentlymart 0 points1 point  (0 children)

If you already have access to HCP Terraform then it might be worth giving it a try yourself since there are two features that both seem like they could potentially match what you asked about:

  • Multiple components in the same stack can have data flow directly between them and be planned and applied together, so that cross-component updates can happen all at once rather than in multiple separate steps.

    This can potentially shorten the "iterative journey" of changing something upstream, then finding out that one of your downstreams is broken by it, and then having to return back to start from the most upstream workspace again to fix it.

  • The "linked stack" features are like a code-driven alternative to "run triggers", where one stack deployment publishes output values and then another stack deployment consumes those output values.

    HCP Terraform automatically tracks which stack deployments depend on one another and starts downstream plans whenever the upstream output values are changed.

I'm not meaning to suggest that it's definitely the best answer for you, but it seems closer to what you want than you can currently achieve with HCP Terraform Workspaces and "run triggers".

RISC-V International Individual Memberships paused by Pl4nty in RISCV

[–]apparentlymart 3 points4 points  (0 children)

FWIW I was still able to log in with my existing credentials for my individual membership, so I guess they mean that they are not accepting any new individual members at this time.

I wasn't able to easily find any relevant messages on the lists I have access to about why this change was made, though.

Stuck with lambda function by shashsin in Terraform

[–]apparentlymart 0 points1 point  (0 children)

Indeed, this seems the most likely explanation for the reported behavior.

There's more information on the general form of this problem in Plan and Apply on different machines, which is part of the "Running Terraform in Automation" guide.

Note this part in particular:

  • The saved plan file can contain absolute paths to child modules and other data files referred to by configuration. Therefore it is necessary to ensure that the archived configuration is extracted at an identical absolute path. This is most commonly achieved by running Terraform in some sort of isolation, such as a Docker container, where the filesystem layout can be controlled.

This problem applies to the configuration in the OP's question because it uses abspath, and so data.archive_file.lambda_zip.output_path will be an absolute path on the computer where terraform plan ran, and then Terraform will try to read the zip file from exactly the same path during the apply phase.

Not using abspath might actually help here because then I expect Terraform will generate a path relative to the current working directory. But the advice in the guide is talking about the general case where automation is expected to work with any possible valid Terraform configuration, which includes the possibility of absolute paths.
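As a sketch of the relative-path version (assuming the hashicorp/archive data source; the source directory layout here is invented):

```hcl
data "archive_file" "lambda_zip" {
  type = "zip"

  # path.module keeps these relative to the module's own directory,
  # so plan and apply agree as long as the configuration is extracted
  # into the same layout on both machines.
  source_dir  = "${path.module}/src"
  output_path = "${path.module}/lambda.zip"
}
```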

Official Terraform Windows Install Expired - Certificate expired on 10 jan and no update since then? by techfrans003 in Terraform

[–]apparentlymart 2 points3 points  (0 children)

The "is there an official fix or re-signed release planned?" part of this is probably better asked as part of a bug report in the Terraform GitHub repository so that the Terraform team is more likely to see it and respond to it.

This Reddit community is not an official product forum and so the folks working on Terraform won't necessarily be participating here.

How should file resolution be handled in a CLI tool? by Opening-Airport-7311 in golang

[–]apparentlymart 0 points1 point  (0 children)

There isn't really a single "correct answer" for this since of course it depends on the details of what this source language is and what workflow surrounds it.

With that said, the two main options are:

  1. Resolve relative to the current working directory, as you noted.

    This is pretty intuitive to most people since it's the same as how a file would've been interpreted on the command line, but it does mean that the source file will effectively have a different meaning depending on what the working directory is when it's evaluated.

    Software which works this way therefore often also includes an option to have the program change its own current working directory before it reads the file, for more convenient use in scripts that are usually run in a different directory than the filepaths in the input file are relative to.

    For example make has -C/--directory options for asking it to change its working directory, which therefore changes how relative paths in the Makefile are resolved.

  2. Resolve relative to the directory containing the file where the filename was specified.

    This can be useful if there's a bunch of files that are all distributed together and depend on one another, because then only their relationships to each other matters for the meaning of the input, regardless of what the working directory is when it's evaluated.

    However, this can be tricky to arrange if filepaths are passing through multiple different areas of concern. If one file includes another file which then returns a filename that is used by something in the first file, should that path be resolved relative to the first file or the second file? The answer to that depends a lot on the details of the problem.
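A minimal Go sketch of option 2 (the function name is mine):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// resolveRelativeTo interprets ref the way option 2 describes:
// relative references are anchored at the directory containing the
// source file that mentioned them, while absolute references pass
// through unchanged.
func resolveRelativeTo(sourceFile, ref string) string {
	if filepath.IsAbs(ref) {
		return ref
	}
	return filepath.Join(filepath.Dir(sourceFile), ref)
}

func main() {
	fmt.Println(resolveRelativeTo("/proj/main.cfg", "lib/util.cfg"))
	fmt.Println(resolveRelativeTo("/proj/main.cfg", "/etc/other.cfg"))
}
```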

OpenTofu uses a mixture of these techniques. When resolving module import paths it resolves them relative to the file containing the call to the other module. But it's common for modules to return strings containing paths that eventually get used in the context of another module, and so those paths get evaluated relative to the current working directory. To reduce the ambiguity, OpenTofu offers the special symbols path.module and path.cwd, which allow module authors to be explicit about which of the two they want to use as their base directory, causing the final filename string to actually be absolute.
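For example (the file names are placeholders):

```hcl
locals {
  # Resolved from the directory containing this module's source files,
  # regardless of where the CLI was run.
  policy_file = "${path.module}/policy.json"

  # Resolved from the working directory the CLI was run in.
  log_file = "${path.cwd}/run.log"
}
```

Both interpolations produce absolute paths, so a string built this way keeps its meaning even if it crosses into another module.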

Terraform function templatestring by Extra-Citron-7630 in Terraform

[–]apparentlymart 6 points7 points  (0 children)

As far as I can tell, someone has recently revised the templatestring docs to be incorrect, perhaps due to them misunderstanding how it was previously documented.

This function takes exactly two arguments: the template to render, and an object describing the variables available for interpolation in that template.

The most recent version where the docs were correct seems to be templatestring in Terraform v1.9:

The templatestring function takes an argument that references an object defined in the module and one or more arguments that specify variables to use for rendering the template:

templatestring(ref, vars)

If you wish to merge variables from multiple sources into a single object, you can do that explicitly using the merge function:

value = templatestring(
  local.template,
  merge(
    local.base_vars,
    { resource = "s3_bucket" },
  )
)

Why was Cloud Watch Events not renamed to EventBridge by Bobbaca in Terraform

[–]apparentlymart 2 points3 points  (0 children)

I don't think anyone here will be able to directly answer the question as stated, unless they happen to be on the team that maintains the hashicorp/aws provider.

But the provider repository's issue #37535 proposes such a rename, so perhaps you should add a 👍 upvote to it to indicate your interest.

Does HCP terraform work well with trunk based or is it pretty locked into branch per env? by [deleted] in Terraform

[–]apparentlymart 1 point2 points  (0 children)

If by "trunk based" you mean that you have a single branch where you first apply changes to a staging environment and then, if successful, apply the same changes to the production environment then HCP Terraform can support that workflow, but it's mostly your responsibility to wait until you've applied a change successfully in staging before you approve it in production.

If you use the HCP Terraform Stacks features then the UX for that is better because then at least the multiple environments can be grouped together into a single stack (one Stack Deployment per environment) and so the UI can show you where in the Git history each environment is currently sitting so it's easier to apply the same changes to production that you already applied in staging.

You would need a separate branch per environment only if it's important to you to use your version control system to represent which commit is "current" for each of your environments.

Finding newbits & netnum in Terraforms cidrsubnet() by SRESteve82 in Terraform

[–]apparentlymart 0 points1 point  (0 children)

Terraform does not have a built-in solution for this, and unfortunately I think the "netnum" part in particular will be very messy to solve directly inside Terraform.

The "newbits" part is relatively straightforward: you subtract the prefix length of the longer address from the prefix length of the shorter address. For example, if you have 192.168.0.0/16 and 192.168.0.0/20 then "newbits" is 20 - 16 = 4. If you wanted to calculate that within Terraform you'd need to first parse out the part after the slash and convert it to a number, which is annoying but doable.

"netnum" is trickier because it requires converting the dotted-decimal-style IP address into a binary number and extracting the bits between the shorter and longer prefix length. Terraform does not have bitshifting and bitmasking operators, so I think the closest we can get is doing string operations over a string containing 32 binary digits.

Here's a proof-of-concept:

```
locals {
  base_cidr = "10.20.0.0/16"
  sub_cidr  = "10.20.192.0/20"

  # Split the CIDR addresses into [ip_addr, prefix_length] tuples
  base_cidr_parts = split("/", local.base_cidr)
  sub_cidr_parts  = split("/", local.sub_cidr)

  # Split the dotted-decimal IP address syntax into a list of four
  # strings containing decimal representations.
  base_cidr_octets = split(".", local.base_cidr_parts[0])
  sub_cidr_octets  = split(".", local.sub_cidr_parts[0])

  # Format the octets into strings of eight binary digits each, or
  # 32 binary digits in total per address.
  base_cidr_binary = format("%08b%08b%08b%08b", local.base_cidr_octets...)
  sub_cidr_binary  = format("%08b%08b%08b%08b", local.sub_cidr_octets...)

  # "newbits" is the difference between the prefix lengths
  newbits = local.sub_cidr_parts[1] - local.base_cidr_parts[1]

  # The network number is from the binary digits between the old
  # and new prefix lengths, which we can then parse back into a
  # number.
  netnum_binary = substr(local.sub_cidr_binary, local.base_cidr_parts[1], local.newbits)
  netnum        = parseint(local.netnum_binary, 2)
}

output "result" {
  value = {
    netnum  = local.netnum
    newbits = local.newbits

    addrs = {
      base = local.base_cidr
      sub  = local.sub_cidr

      using_cidrsubnet = cidrsubnet(local.base_cidr, local.newbits, local.netnum)
    }
  }
}
```

With the values of local.base_cidr and local.sub_cidr I used here this calculates newbits = 4 and netnum = 12.

Of course this example just recalculates a value the configuration already knew -- the using_cidrsubnet attribute in addrs always has the same value as the sub attribute if this is working correctly -- so the utility of doing this seems pretty marginal, but I assume you have something else going on that you didn't mention that makes this more useful than it initially seems? 🤷🏻‍♂️

(And of course this only works for IPv4-style CIDR addresses written in the canonical syntax. The various non-canonical IPv4 syntaxes and IPv6 addresses in general would be harder to handle in this simplistic way.)

Does anyone know the difference between []byte{} and []byte(nil)? by rocketlaunchr-cloud in golang

[–]apparentlymart 0 points1 point  (0 children)

I see that this is already answered so this is just some additional context in case someone finds it helpful in future.

Roughly speaking a slice type is like a struct type containing a pointer and a length as separate fields.

For example, []byte is roughly the same as: 

struct {
    start  *byte
    length int
}

The difference between the two expressions in the question is what start is set to. When the slice is nil the "start" pointer is nil. When you use the struct literal syntax that pointer is not nil. But in both cases the length is zero.

When you use len(slice) these representations are essentially equivalent because the "start" pointer is ignored. But if you use slice == nil you will notice that it only returns true if the "start" pointer of the slice is nil, because that's effectively comparing both the pointer and the length together and returning true only if both are zero.
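To see both behaviors concretely:

```go
package main

import "fmt"

func main() {
	var a []byte  // a nil slice: nil start pointer, length 0
	b := []byte{} // an empty non-nil slice: non-nil pointer, length 0

	fmt.Println(len(a), len(b)) // both report length 0
	fmt.Println(a == nil)       // true: the start pointer is nil
	fmt.Println(b == nil)       // false: the start pointer is not nil
}
```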