Have you ever modeled a system using "God status" values?
Here's what I mean by God status:
God status is a status that is knowable only by someone (e.g., God) outside the system, because that person can see the whole picture and is prescient. Contrast that with the individual components of a system, which have only local knowledge and therefore can produce limited, less accurate status values. I imagine the set of God status values would be obtained by an after-the-fact analysis of the system.
Let me give a (fictitious) example to illustrate what I mean by "modeling a system using God status values."
Scenario: You are creating a model of a self-driving car. The model has a GPS component that sends estimated position reports every few milliseconds to a Driving Management System. In addition to each position estimate, the GPS sends a status value, which is based on local information only. The set of status values is: OK, lost-signal, internal-error.

After analyzing the log files of a bunch of self-driving cars, you discover that hackers occasionally spoof the GPS, so the GPS occasionally sends position estimates based on spoofed data. The fact that a position estimate was created from spoofed data is (let's assume) unknowable to the GPS; only God knows that the estimate came from spoofed data. You want to design the model so that the cars are impervious to spoofed GPS signals, i.e., a car behaves correctly even when hackers occasionally spoof the GPS.

So, you model the GPS as outputting three values: position estimate, status, and God status. The Driving Management System makes decisions based on the position estimate and the two status values.
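To make the scenario concrete, here is a minimal sketch (in Python, purely illustrative; every name in it, such as GpsReport and DrivingManagementSystem, is made up). Whether the decision logic itself may read the God status, as in the scenario as literally stated, or only the after-the-fact analysis may, is exactly the modeling choice I am asking about; the sketch takes the second option.

from dataclasses import dataclass
from enum import Enum, auto
import random

class Status(Enum):           # local status: what the GPS itself can determine
    OK = auto()
    LOST_SIGNAL = auto()
    INTERNAL_ERROR = auto()

class GodStatus(Enum):        # omniscient label: unknowable to the GPS itself
    GENUINE = auto()
    SPOOFED = auto()

@dataclass
class GpsReport:
    position: float           # simplified one-dimensional position estimate
    status: Status
    god_status: GodStatus

def gps_report(true_position: float, hacker_active: bool) -> GpsReport:
    """The environment (the 'God' view) emits a report and labels it."""
    if hacker_active:
        # Spoofed data: the GPS still reports OK because it cannot tell.
        return GpsReport(true_position + random.uniform(-50.0, 50.0),
                         Status.OK, GodStatus.SPOOFED)
    return GpsReport(true_position, Status.OK, GodStatus.GENUINE)

class DrivingManagementSystem:
    """Decision logic. It deliberately ignores god_status, so the analysis
    below can check robustness; letting decide() read god_status instead
    would match the scenario as literally stated."""
    def __init__(self) -> None:
        self.recent: list[float] = []

    def decide(self, report: GpsReport) -> float:
        if report.status is not Status.OK:
            # Fall back to the last accepted estimate (or 0.0 at the start).
            return self.recent[-1] if self.recent else 0.0
        self.recent = (self.recent + [report.position])[-5:]
        # Robustness heuristic: median of the recent estimates.
        return sorted(self.recent)[len(self.recent) // 2]

def spoofing_tolerated(true_position: float, decision: float,
                       report: GpsReport) -> bool:
    """After-the-fact analysis: only here is the God status consulted."""
    if report.god_status is GodStatus.SPOOFED:
        return abs(decision - true_position) < 10.0   # arbitrary tolerance
    return True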
In your modeling experience, have you ever done this kind of thing (incorporated God status into your model)? Is this a common thing to do? A rare thing to do? Is it beneficial? Is it useless?
I wonder if this is the same as modeling an attacker that can create arbitrary values and events. Those events can be labeled as generated by the attacker even though the model of the system's behavior does not mention the label (and therefore cannot depend on it). This is a common strategy in security analyses; I sketch what I mean right after the reference below. See, for example, the work from Eunsuk Kang's PhD thesis:
Multi-Representational Security Analysis. Eunsuk Kang, Aleksandar Milicevic, and Daniel Jackson. Symposium on the Foundations of Software Engineering (FSE), 2016.
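If that labeling strategy is what's meant, here is roughly how I picture it (again a purely illustrative Python sketch, not taken from the paper; Event, system_step, and robust_to_attacker are names I made up): the attacker's events carry a provenance label, the system's step function never sees the label, and only the property being checked consults it.

from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    payload: float        # what the system actually observes
    from_attacker: bool   # provenance label; never read by system_step

def system_step(state: float, payload: float) -> float:
    """System behavior: it takes only the payload, so by construction it
    cannot depend on the provenance label."""
    delta = max(-1.0, min(1.0, payload - state))   # toy bounded adjustment
    return state + delta

def run(trace: list[Event], initial: float = 0.0) -> float:
    state = initial
    for event in trace:
        state = system_step(state, event.payload)
    return state

def robust_to_attacker(trace: list[Event], tolerance: float = 2.0) -> bool:
    """Analysis-level property: the label is consulted only here, to compare
    the full run with the run that drops attacker-generated events."""
    genuine = [e for e in trace if not e.from_attacker]
    return abs(run(trace) - run(genuine)) <= tolerance

If so, the God status in my GPS example seems to play the same role as the from_attacker label here.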