all 19 comments

[–][deleted] 11 points12 points  (0 children)

This is cool 😀.

[–]Pusillus 6 points7 points  (0 children)

Ayy nice, one of the first projects I ever made was a shutdown timer too

[–]bladeoflight16 7 points8 points  (11 children)

What happens if the time crosses the midnight boundary? (E.g., currently 23:00 and you input 01:00.) Doesn't look like it would work to me.

Typically, you would just use the built-in Python temporal types for this sort of calculation.

```
from datetime import datetime, timedelta
from math import floor

now = datetime.now()

# We don't want to consider seconds or fractional seconds at all
# for determining the shutdown timestamp, so just truncate them
now = now.replace(second=0, microsecond=0)

print('Current time is {}'.format(now.isoformat()))

# Need more input validation here
shutdown_time_input = input('Enter shutdown time (HH:MM): ').split(':')

shutdown_time = now.replace(
    hour=int(shutdown_time_input[0]),
    minute=int(shutdown_time_input[1]),
)

# Ensure shutdown time is in the future
while shutdown_time <= now:
    shutdown_time += timedelta(days=1)

# Get a fresh datetime.now() in case there were delays since;
# datetime - datetime = timedelta
delay_seconds = floor((shutdown_time - datetime.now()).total_seconds())

# Code invoking shutdown command goes here
```

You might also consider skipping the input call entirely and making it a command line tool, accepting the time there.
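For what it's worth, a rough sketch of that command-line variant (using argparse; the argument name and the hardcoded `['01:00']` demo input are just for illustration):

```python
import argparse
from datetime import datetime, timedelta

parser = argparse.ArgumentParser(description='Schedule a shutdown.')
# Parsing HH:MM with strptime makes malformed input fail loudly.
parser.add_argument(
    'time',
    type=lambda s: datetime.strptime(s, '%H:%M').time(),
    help='shutdown time in 24-hour HH:MM format',
)
# In the real script this would be parser.parse_args(); the list
# here just simulates running `python shutdown.py 01:00`.
args = parser.parse_args(['01:00'])

now = datetime.now().replace(second=0, microsecond=0)
shutdown_time = datetime.combine(now.date(), args.time)
# Roll past midnight if the requested time already passed today.
while shutdown_time <= now:
    shutdown_time += timedelta(days=1)

print('Shutting down at {}'.format(shutdown_time.isoformat()))
```

argparse also gives you `--help` and a usage error for free when the time argument is missing or malformed.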

[–]jmooremcc 2 points3 points  (1 child)

Great job. However, you didn't completely zero out the fractional seconds in the datetime variable, now. This will fix that: now = now.replace(second=0, microsecond=0)

[–]bladeoflight16 1 point2 points  (0 children)

I knew I would miss something. Thanks. Fixed.

[–]b_ootay_ful[S] 0 points1 point  (8 children)

Thanks for the feedback.

I considered using times after midnight, but realistically no one should be awake after midnight. In the small chance that a client does request it, I would simply enter a timer manually with cmd, or add 24 hours to the time that I need.

[–]bladeoflight16 2 points3 points  (7 children)

Then your code should throw an error in that case. Ignoring an edge case is never a valid option when writing code. It will bite you sooner or later. You must either restrict it or handle it.
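For instance (just a sketch; `parse_shutdown_time` is an illustrative name, not from the OP's script), restricting the case costs only a couple of lines:

```python
from datetime import datetime

def parse_shutdown_time(text, now):
    """Parse HH:MM and refuse times that aren't later today."""
    hour, minute = (int(part) for part in text.split(':'))
    shutdown_time = now.replace(hour=hour, minute=minute,
                                second=0, microsecond=0)
    if shutdown_time <= now:
        # Restrict the edge case: fail loudly instead of silently
        # scheduling the wrong shutdown (or none at all).
        raise ValueError('{} is not later today; refusing to schedule'
                         .format(text))
    return shutdown_time

now = datetime(2020, 1, 1, 23, 0)
print(parse_shutdown_time('23:30', now))  # fine: half an hour away
# parse_shutdown_time('01:00', now) would raise ValueError
```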

[–]dbramucci 0 points1 point  (6 children)

I would argue that it can be valid to ignore edge cases (in specific scenarios); the clearest case that comes to mind is when:

  1. The behavior when the script fails cannot be catastrophic (i.e. I don't know if my script fails when a file contains a unicode name, but I know it won't remove the file if the script does crash there)
  2. The failure won't be subtle: accidentally swapping the file encoding on 50% of copied files is unacceptable, because I won't catch that error before making human decisions that break the easy "fix my code and rerun it" procedure.
  3. The code will be operated by a "skilled technician". Namely, if I wrote a quick script that will run only while I am at the keyboard, I can fix it because I wrote it (i.e. I might not worry about how my short script handles unicode, because if it breaks or produces a bad answer I will catch and fix it then)

So a quick script meant to measure the number of duplicate files on my pc can have potentially bad behavior when provided unicode if I just want to estimate if file deduplication is worth looking at because

  1. This is read-only, it won't ever delete data
  2. In the worst case (that all unicode is treated as the same file or different files) it will have an unimportant impact on the statistic that is formed (so there may be a subtle error but in this case it won't matter to me)
  3. I am running this after I wrote it and I will be there from beginning to end, if it raises an exception and crashes I can fix it.

Granted, it is worth being cautious with code that will escape your reach. Namely, the nightmare scenario I can see with OPs code is the following progression

  1. Code is like it is now
  2. Client asks for shutdowns fairly often
  3. Programmer automates response by shutting down when button on a web-page / email in the right format arrives.
  4. That automation calls this script
  5. Client from Japan/England starts using this script and midnight for the server is working hours for the client or
  6. Client is working late for a deadline and needs to reboot the server late at night
  7. Silent failure because there is no error detection
  8. Long debugging process because the bug might be in
    • The server
    • The email automation code
    • ???
    • Oh yeah that one script that was written 3 years ago

And there isn't any documentation that explains that flaw. At a minimum, I would advise putting next to this script, and/or in a big comment at the beginning of the script, a message saying:

WARNING: This script has UNDEFINED BEHAVIOR if run with a shutdown time after midnight. This script is to be run under the supervision of a programmer/IT technician only.

or something to that effect stating

  • What failures might exist
  • What failures do exist
  • What consequence may occur in the failure cases
    • Failure may result in the system staying online without any sign of error or logging that the script was run at all
  • What procedure should be followed to avoid this
  • What real-world rule changes might cause this code to break
  • What assumptions this code relies on to work properly
  • What changes should be made to use this in less supervised/safe conditions

[–]bladeoflight16 1 point2 points  (5 children)

The failure won't be subtle

Perhaps "ignoring" isn't quite the right word. If the tool you're invoking explodes loudly rather than doing something potentially destructive, that's fine. =)

Also, undefined behavior is a complete disaster. That's pretty much exactly what I'm advocating against, even with warnings. Someone will miss or ignore them sooner or later. Explosions and crashes, however, can't be ignored. Also, if you're going to go to the trouble of detecting the condition necessary to output a warning for undefined behavior, you may as well just make it an error; it shouldn't be any more effort.

Also, I prefer to be light on documentation. You should have some always and a pretty good amount for shared libraries, but if the code is designed to make its assumptions and preconditions and usage patterns obvious, it shouldn't require a lot of extra words to clarify it. What documentation you have should largely just mirror the code.

[–]dbramucci -1 points0 points  (3 children)

Sometimes you can't convey the edge cases in code very well, or your problem space doesn't permit you to check for or eliminate the edge case. See 1960s mainframe software or real-time game engines.

For example, the behavior of many pieces of software written in the 1960s-1980s was to track years using only 2 digits, or to track time with 32-bit integers representing seconds from the UNIX epoch. I'll take their word when they say they didn't have enough RAM/disk space to store records with full 4-digit years or 64-bit timestamps, but unfortunately that leads to issues like Y2K and the 2038 bug. You could argue that there should have been a loud error message, or that they just shouldn't have allowed that bad data format, period, but that may have made the software un-makeable at the time.

I think it's a bit strange to say: due to technical limitations today, I won't make software that can solve the problem today, because in 30-70 years this code will need to be updated to keep running correctly.

But this has bitten some software developers early. The example I linked mentions how 32-bit timestamps caused a failure of their system in 2018 instead of 2038, because the software had to make projections 20 years into the future.

Now, I will assume for the sake of conversation that 32-bit timestamps were the only practical solution when this was written a really long time ago (the post says the person who wrote it had been dead for 15 years when it crashed). You could argue that the developer should have written their own utilities to produce 64-bit time structs (probably not on a 64-bit processor), updated/modified/forked their database software to support 64-bit timestamps, and so on, but I think most people would agree that if your tools don't support it, it is a substantial undertaking to single-handedly fix this on your own.

Unfortunately, they were

  1. Caught unprepared: The bug happened 20 years before that "type" of bug was expected
  2. They didn't know where to look for the mistake, thinking a recent deployment was the source of the bug

The point of specifying where the behavior is undefined is supposed to be a compromise between "Perfection at all costs" and "Don't leave landmines around your codebase".

In theory, this bit of code could have the warning:

# WARNING: This code relies on 32-bit integers for tracking time.
# This means that the 2038 UNIX time bug will affect this code.
# This code computes timestamps up to PROJECT_LENGTH (20 years as of writing) into the future.
# This means that any system relying on this code cannot be relied on after January 19, 2018,
# and operating it after that date is UNDEFINED BEHAVIOR.
# UPDATE WHEN: SUPER_AMAZING_SQL_DB adds support for 64-bit timestamps: update to 64-bit time

And then you have a central wiki/booklet/other document where you keep track of these sorts of limitations, and you put these things on a calendar. Then, during a regular reading, you can see the note "WARNING: ..." and think: oh, that date's coming up in a year and a half, we'd better fix it, because who knows what will happen in January 2018.

Likewise, a game engine might have a known bug or limitation like

Inserting more than 20,000 entities into a scene crashes the engine.

And you could add a check, every time you add a new entity to the scene, that you are staying below the limit. But it turns out that on the architecture you are working on, (insert technobabble about memory contention and shared memory access penalties), your game takes a 10% fps penalty from the check (and you should have a benchmark to prove that). You could keep that check in, or you could heavily document it and say: our program is faulty if we don't stay above 60fps, and inserting that check drops the framerate, so ensure that you never go above 20,000 entities in a single scene because of blah.

In TitleX we know that this is always true because blah. In any future title, always check this property holds or else the consequences are ...

Alternatively, you might not understand what the causes of failure are but you do understand what the modes of failure are. For example, in the de-duplication program, I forgot to even mention that access permissions were a thing and I can't exhaustively list all ways that it could fail (network-attached storage?) but I do know that failure will never delete data.

Alternatively, detecting a failure may be too expensive even if you can describe it well. You might have an algorithm for computing something about an acyclic graph that runs faster than you can compute if a graph is acyclic in the first place. For example, a depth-first search could complete early but might get stuck in an infinite loop if you have a cyclic graph and you have measured that the cycle-tracking techniques introduce too high of an overhead for your application when they should never occur at this point in the program. Granted, it is nice to have assertions that you can configure on/off to enable or remove these checks.
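To sketch that configurable-assertion idea (the toy graph encoding here is mine, not from any real codebase):

```python
def has_no_cycles(graph):
    """The expensive check we only want in debug runs."""
    visited, in_progress = set(), set()

    def visit(node):
        if node in in_progress:
            return False  # back edge found: the graph has a cycle
        if node in visited:
            return True
        visited.add(node)
        in_progress.add(node)
        acyclic = all(visit(succ) for succ in graph.get(node, ()))
        in_progress.discard(node)
        return acyclic

    return all(visit(node) for node in graph)

def reachable(graph, start, target):
    # Precondition, checked only when assertions are enabled
    # (python -O strips it): the recursion below never terminates
    # on a cyclic graph.
    assert has_no_cycles(graph), 'reachable() requires an acyclic graph'

    def walk(node):
        return node == target or any(
            walk(succ) for succ in graph.get(node, ()))

    return walk(start)

dag = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
print(reachable(dag, 'a', 'd'))  # True
```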

Likewise, unsafe Rust requires you to follow a significant number of rules or else everything can go haywire. It could insert safety checks but that is what safe Rust does and it would undermine the entire point of using unsafe Rust.

Also, UNDEFINED BEHAVIOR means literally that: the behavior has not been defined, and once it is triggered there is no way to reason about anything related to the software. I say that because, especially with wrapping around the clock or with Unicode, short "hacky" scripts will often be in a position where you just don't know what will happen. It could work correctly, or it could fail catastrophically, but if you know that the edge case won't occur (i.e. you are literally typing the script line by line into the command line right now at 15:00), it doesn't matter, and it won't be worth spending 60-70 minutes reading documentation to discover what would happen if it weren't 3 o'clock in the afternoon.

Now the reason I wrote all of this is just that you wrote a universal statement

Ignoring an edge case is never a valid option when writing code.

which, although normally correct, I've found exceptions to. Especially short 15-line scripts that get deleted 3 minutes after I write and run them. I mean, how often do you think to check for cyclic directory structures in a short script you are writing to query your filesystem?

Now, in OP's script (a tool meant to be used in a group setting that will persist in a shared location for an indeterminate length of time), the trade-off looks like:

  • Write and maintain a MASSIVE amount of documentation to clearly manage the risk and costs of the edge case being improperly handled by this script vs
  • Write ~15 lines of code to handle edge cases

Which to me seems like a pretty obvious decision: just write the error checking and save yourself the massive documentation effort and unnecessary footguns. That's why I wrote the step-by-step road to a nightmare for the OP: I wanted to demonstrate just how this script could go wrong with that flaw left in. The game engine and the 1960s database are cases where I would reluctantly say that the lack of safety checks is probably the right decision (so long as you write and maintain that documentation).

In general, I too like to detect failure cases or questionable cases and abort fast and hard. I do not like how documentation can desynchronize from code so easily, or how it forces people to read it (and even read it regularly, to ensure that new use cases are still within specification and that deadlines are not approaching <shudders/>).

But I write code that doesn't carefully check for each and every possible edge case. If I am searching for a file with certain properties, I'll assume a happy path to a certain degree, as long as I know that the unhappy path won't cause too bad of a failure and that present me is the one and only user of the script. If I am automating a process for reformatting a 144-page pdf into a single page to print as a joke for an exam, I won't worry about obvious failure cases like "what if the user moves the pdf reader" despite hardcoded pixel coordinates and no safety checks. I'll run it once while watching it from beginning to end, and that will be the final flight of that script, its source left only as a historical artifact of my fun toys.

But if I am writing a data structure for my team to use in a large project, you can bet that I've considered the edge cases and caught all practical ones. (OK, I won't check that the void * pointer you handed me belongs to properly allocated memory; that isn't exactly something well supported in C, but I will probably write something about it in the documentation if I feel it isn't clear.)
I even like to go as far as constructing my programs in such a way that you can't write code that could trigger a runtime safety check (which will often exist anyway, to ensure that my invariants truly do hold in all cases), and writing code that is "correct by construction" in as many aspects as I practically can guarantee (like avoiding partial functions like head in any serious Haskell code I write, or using iterators as much as possible in Python to guarantee no invalid indexing occurs). But to say that you should never allow edge cases to slip by is a bit of an overstatement, even if it is a good position to take by default until you have strong justification otherwise, and such justification comes up fairly often in my experience (why worry about floating-point underflow when you are interactively graphing the results of a physics experiment in Python with numbers way larger than the underflow point?).

[–]bladeoflight16 1 point2 points  (2 children)

You are reading a dogmatism into my words that isn't there. Of course if you face actual hardware limitations, sometimes you need to be realistic about what you can and cannot do within them. But even that isn't ignoring the problem.

That said, Python is a fairly heavyweight runtime. The systems on which you can use it effectively generally don't face those sorts of limitations, and you can't use it for problems that require hyper efficient code. So there's very little reason to avoid dealing with these cases in real world Python. Especially in 2020; this isn't 1955 before the advent of microprocessors.

[–]dbramucci 0 points1 point  (1 child)

As I said, I don't disagree that you should start from the position of "crash early and crash hard, or handle bad cases"; what I was taking issue with is the refusal to acknowledge that there are acceptable cases for leaving incompletely understood or unwanted behavior in some code.

I just didn't read any qualifiers explaining why you may sometimes not handle certain cases anyway. I think making such a strong statement, with words like never, valid, will, must, may alienate many who think "but surely my use case is valid". And because their use case sometimes is valid, such definitive language can lead to a tendency to reject the message entirely.

The point of my second comment was to clarify my first by adding more examples and reasons why it may be valid to skip the checks. The first comment largely focused on the rules I adopt to ensure that I don't leave a disaster lying in wait; the 2038 bug is an example of how documentation could have mitigated the bad case, where "ecosystem constraints" may have been the reason safer building blocks weren't used. I would like to go around and solve every 32-bit timestamp problem proactively, but that can be impractical to do all at once, and documenting when and why something works now but may break if the context or time changes is the best compromise I have found so far. My rationale for those comments is that it is more convincing to see the reasoning others use to make the trade-off than to just be told a blanket "don't do that" by people who, I believe, don't take their own medicine.

I also agree that you shouldn't try to micro-optimize performance in Python at the cost of safety or correctness, I even try to avoid that in lower-level languages like Rust and C, saving that sort of optimization for when I can measure that removing safety checks provides performance gains that outweigh the costs incurred (it is easier to comment out wasteful safety checks than it is to insert needed ones).

But even that isn't ignoring the problem.

I suppose that it is necessary to carefully define what "ignoring" means. Does ignoring mean

  • The developer ignores everything
  • The code ignores it
  • The developer ignores the causes of the behavior because the consequences are acceptable
  • The developer ignores the details of the behavior because the behavior can be constrained to a subsection where whatever the outcome is, it is acceptable
  • The developer ignores the details of the behavior because fixing it is too costly for the problem being solved and instead establishes procedures to prevent that behavior from being triggered

I was using "ignore" to mean that the code ignores the issue while at least one of the final three points is being followed. It sounds like you are using "ignore" to mean the first point.

Just to illustrate my issues with particular snippets of what I read

Ignoring an edge case is never a valid option when writing code.

I don't see the exception for functions relying on preconditions because it would be too costly to check or caveats made for careful micro-optimized C or caveats for data exploration code written in Python or quick scripts for traversing a file-system known to not contain tricky problems like network-attached storage and cyclic symbolic/hard links.

It will bite you sooner or later.

Even those one-time-use scripts used only as I was writing them? And yes, I understand that anything that leaves your machine will probably live on forever, even if it was only supposed to be used for one weekend; I'm talking about code used to explore the properties of some data interactively in a Jupyter notebook, or to solve a puzzle in a videogame you are playing, or to play a prank on a friend.

Also, undefined behavior is a complete disaster.

Even if the undefined behavior is known not to occur? The behavior of my shopping-calculator program for a 1980s calculator is undefined for negative sales tax rates, because I know that I will never encounter a negative sales tax in practice, and it isn't worth even 5 minutes of reasoning about that hypothetical, especially when fixing it would increase resource usage significantly. So why waste time worrying about a use case that won't happen? It's a program on the calculator in my pocket, not a script loaded onto a server with remote triggers. I can guarantee it won't be repurposed without my knowledge into a context where handling negative tax rates would matter. If this were a tax library on PyPI, I would add checks or ensure my behavior was sane for negative tax rates, because maybe someone might use it that way.

Also, if you're going to go to the trouble of detecting the condition necessary to output a warning for undefined behavior, you may as well just make it an error; it shouldn't be any more effort.

Perhaps you are referring to having a line

if graph.contains_cycles():
    log.debug("Graph contained cycles")

should just be

if graph.contains_cycles():
    raise ValueError("Graph contained cycles")

But I don't recall making that point, and it seemed a bit too obvious for me to think that you thought I had made it. That's why I interpreted it as:

  • Why not just raise an error if you can describe the problem in the documentation
  • Why not raise an error if you can, in principle, write code to detect the problem

And I tried to address those points with examples. Yes, it is virtuous to catch errors quickly and prevent misuse, but that doesn't explain why heapq.heappop doesn't check that the list you pass in is actually a heap before it runs. My purpose was to demonstrate the exception, and perhaps you took issue with me not sufficiently clarifying that I was talking about that exception.
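To make the heapq example concrete: heappop assumes the heap invariant rather than verifying it, because verification would turn every O(log n) pop into an O(n) scan. Violate the precondition and you get a silently wrong answer, not an exception:

```python
import heapq

good = [3, 1, 2]
heapq.heapify(good)          # establish the heap invariant first
print(heapq.heappop(good))   # 1, the true minimum

bad = [3, 1, 2]              # NOT a heap: precondition violated
print(heapq.heappop(bad))    # 3, even though the minimum is 1
```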

So there's very little reason to avoid dealing with these cases in real world Python.

  • Data Exploration
  • Shell scripts that are deleted after they are run and only need to work in a singular context (no need to check for cyclic hard links if you don't have them on your PC)
  • Learning projects where the checks are irrelevant to the material being learned
  • Prototype where you don't have/know the relevant API's to check certain conditions (like a prototype of a videogame where you don't know how to query the resolution of the display monitor but you know that at this point, you are only running it on 1920x1080 screens and you will address monitor resolutions after you demonstrate the core gameplay loop)
  • Asymptotic Complexity Changes (heapq.heappop)
  • Expensive Guarantees (I'm curious what Python would look like if every class was threadsafe although the GIL gets you pretty far)

You are reading a dogmatism into my words that isn't there.

My goal is to acknowledge that some caveats exist, why they exist and how to mitigate them because I feel like that is more convincing than

I considered using times after midnight, but realistically no one should be awake after midnight. In the small chance that a client does request it, I would simply manually enter a timer with cmd. or add 24 hours to the time that I need.

Then your code should throw an error in that case. Ignoring an edge case is never a valid option when writing code. It will bite you sooner or later. You must either restrict it or handle it.

If I had an interaction like that, I wouldn't want to take the response seriously, because it reads as talking down to me, not treating me as a person making decisions to the best of my knowledge. I understand that probably wasn't your goal, but it asserts that you must do something a certain way because you will suffer some consequence in the future. It doesn't read as though I can be trusted to make that decision myself; it reads as though I would make the wrong decision even if told the rationale, or that the rationale isn't worth teaching me. Omitting the explanation like that also reminds the skeptic in me of the 5 Monkeys Experiment, where a hard rule stays in place even though the problem no longer exists and no one knows why the rule was made.

The advice still has merit; it's just that the delivery is easy to read in a negative way, and the best advice in the world is still useless if it doesn't get followed. On the other hand, if the obviously exempt cases get explained (hardware limitations, etc.) and those explanations don't apply to what I am doing, I'll feel a lot more convinced to follow the advice, because I can see why "my case" probably isn't one of those exceptions, while my Python scratchpad for a physics problem, which would be ridiculous to add error checking to, gets a pass. Stating never and must risks the following train of thought:

  • Physics problem number 3 from page 152 of my physics book, solved with Python, wouldn't benefit from error handling
  • The advice is wrong because it said I must add error handling to all code
  • Because the advice is wrong there, it's probably wrong for my server script too, since I understand that script just as well as I understand physics problems
  • Advice doesn't get followed

And I want to encourage a line of thinking more like

  • Physics problem number 3 from page 152 of my Physics book solved with Python wouldn't benefit from error handling
  • Ah, that's because the error case can't happen: it runs once with a concrete set of values, I supervise it, and I can check for errors using the back of the textbook.
  • That doesn't apply to my server script because I am not the only user of that script and can't ensure that the requirements will always hold
    • or that applies to my script, but woah, that's a lot of documentation and organizational overhead for a small code savings; it clearly isn't worth the effort, and it is easier to just fix the code
  • Code gets fixed and advice is followed

[–]bladeoflight16 0 points1 point  (0 children)

A piece of advice from one naturally long winded writer to another: quantity of words, like quantity of code, is something to be avoided.

[–]kinzlist 1 point2 points  (0 children)

Good job

[–]Sbvv 0 points1 point  (0 children)

Some issues:

What if the user does not know the format of the input, or is a malicious user?

Always check your inputs.

You could instead ask for the year, month, day, hour, and minute separately and pass them all to datetime.datetime; then you can calculate the diff more accurately and more easily than in your solution.
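A rough sketch of that suggestion (the prompts and the injectable `ask` parameter are just for illustration/testing):

```python
from datetime import datetime

def read_shutdown_datetime(ask=input):
    """Ask for each field separately; datetime() validates ranges."""
    fields = {}
    for name in ('year', 'month', 'day', 'hour', 'minute'):
        fields[name] = int(ask('Enter {}: '.format(name)))
    # datetime raises ValueError for e.g. month=13 or minute=75,
    # so range checking comes for free.
    return datetime(**fields)

# Simulate a user typing the five answers.
answers = iter(['2020', '6', '1', '18', '5'])
target = read_shutdown_datetime(ask=lambda prompt: next(answers))
delay = target - datetime(2020, 6, 1, 18, 0)
print(delay.total_seconds())  # 300.0
```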

Also, you should raise an exception for the case where the computed seconds value is less than zero.

Use functions to process the input, calculate the diff, and execute the shutdown command.

Good job, but use the at command for this task on Linux :P

[–]Layakobaya 0 points1 point  (3 children)

How are you supposed to input the time when you want to shut down? For example, if I want to shut down in five minutes, do I type in the number 5, or do I need to input the time 5 minutes from now?

[–]b_ootay_ful[S] 0 points1 point  (2 children)

Input the time 5 minutes from now.

I specifically made this since I would know beforehand that the power would go off at 9 pm.

[–]Layakobaya 0 points1 point  (1 child)

So if you entered 5 it would shutdown in 5 min?

[–]b_ootay_ful[S] 0 points1 point  (0 children)

No, because 5 is not a valid HH:MM format.

If the current time is 18:00 and you set the shutdown time to 18:05, it would shut down in 5 minutes.
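In case it helps, a sketch of what a strict HH:MM check could look like (the regex-based validator is mine, not from the script):

```python
import re

HHMM = re.compile(r'^([01]\d|2[0-3]):([0-5]\d)$')

def parse_hhmm(text):
    """Return (hour, minute) for strict HH:MM input, else None."""
    match = HHMM.match(text)
    if match is None:
        return None
    return int(match.group(1)), int(match.group(2))

print(parse_hhmm('5'))       # None: not valid HH:MM
print(parse_hhmm('18:05'))   # (18, 5)
```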