The problem is to read 5 numbers from the user and determine whether any of them is larger than 2000. What is the difference in effect of these three segments of code?

## this one is NOT good!  it is bad!
big_number_flag = False
i = 0
while i < 5:
    n = int(input("Enter a number"))
    i = i + 1
    if n > 2000:
        big_number_flag = True
    else:
        big_number_flag = False

# after the loop
if big_number_flag:
    print("saw at least one big number")
else:
    print("didn't see any big numbers")


After the loop finishes, all you can say about big_number_flag is that it is True or False depending ONLY on the LAST value entered, not on all 5 values, because each pass through the loop overwrites whatever the previous pass stored. It is very tempting to write flag code like this if you are not thinking carefully.
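
For example, suppose the five inputs happen to be 3000, 10, 10, 10, 10 (values picked just for illustration). The first pass sets the flag to True, but every later pass resets it to False:

    3000  -> big_number_flag = True
    10    -> big_number_flag = False
    10    -> big_number_flag = False
    10    -> big_number_flag = False
    10    -> big_number_flag = False

so the program prints "didn't see any big numbers" even though 3000 was entered.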

#----------------------------------------------
# this one is bad too!
big_number_flag = True  ## wrong initialization!
i = 0
while i < 5:
    n = int(input("Enter a number"))
    i = i + 1
    if n > 2000:
        big_number_flag = True

# after the loop
if big_number_flag:
    print("saw at least one big number")
else:
    print("didn't see any big numbers")

In this case, you CAN say after the loop that big_number_flag WILL be True. But that tells you nothing, because it may be True because of the initialization, OR because it was set to True inside the loop. Note a very obvious bug here - nowhere is the flag ever set to False. If there is no way for it to hold any value but True, it is useless as a flag.
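
For example, if the five inputs are 10, 20, 30, 40, 50 (again, just an illustration), the flag is never touched inside the loop, it keeps the True it was initialized with, and the program wrongly prints "saw at least one big number".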

#----------------------------------------------
# this one is MUCH better
big_number_flag = False
i = 0
while i < 5:
    n = int(input("Enter a number"))
    i = i + 1
    if n > 2000:
        big_number_flag = True
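
# after the loop
if big_number_flag:
    print("saw at least one big number")
else:
    print("didn't see any big numbers")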

After the loop, if big_number_flag is True, SOME value over 2000 was seen - it could have been any one (or more than one) of the 5 values, but that's ok. If big_number_flag is False, you can be sure that no number over 2000 was seen. This is the usual way to write the logic for a flag.
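
In idiomatic Python you would normally let a for loop do the counting instead of keeping i by hand. Here is a sketch of exactly the same flag logic written that way - same variable names, same test:

big_number_flag = False
for _ in range(5):                       # 5 inputs, no manual counter needed
    n = int(input("Enter a number"))
    if n > 2000:
        big_number_flag = True           # set it; never reset it inside the loop

# after the loop
if big_number_flag:
    print("saw at least one big number")
else:
    print("didn't see any big numbers")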

To summarize, you should be able to write a sentence about any variable that you use as a flag: "error_seen will be True if any error in the inputs has been seen (where error means ...) and it will be False if no errors have been seen." This sentence should be an "invariant", meaning it stays true throughout the program. That forces you to think about "what do I initialize it to at the start?" Well, I can't have seen any errors before I've seen any input, so it starts as False. It also means you test the flag only when it is meaningful to do so: you usually don't test it during the main processing, you test it afterwards. You set it during the processing as needed.
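
Here is a sketch of what that looks like in code. (The definition of "error" - a negative number - is just an assumption for illustration; the point is the comment stating the invariant, the initialization, and where the flag is set and tested.)

# invariant: error_seen is True if any bad input has been seen so far
# (for this sketch, "bad" is assumed to mean a negative number),
# and False if no bad input has been seen.
error_seen = False                       # no input seen yet, so no errors yet

i = 0
while i < 5:
    n = int(input("Enter a number"))
    i = i + 1
    if n < 0:                            # the assumed "error" for this sketch
        error_seen = True                # set during processing; never reset

# test the flag afterwards, when it is meaningful
if error_seen:
    print("saw at least one bad input")
else:
    print("all inputs were ok")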