>>999181
If it's possible for the operator to do something the specification forbids, the system is insecure. Beyond that, it's not the system's job to prevent you from doing things the spec never covers. So a claim that the OS is unhackable doesn't mean the user is unphishable. There are lots of meanings of "security", but the relevant one for software implementation is specification compliance.
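To pin down what "specification compliance" means in code, here's a minimal sketch in Whiley (same language as the sort below); the function and its spec are made up for illustration:
function abs(int x) -> (int r)
// the spec: every execution must make these ensures clauses hold,
// and "secure" means exactly that, nothing more
ensures r >= 0
ensures (r == x) || (r == -x):
    if x >= 0:
        return x
    else:
        return -x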
>>999188
// define concept of partially sorted array
property sorted(int[] arr, int n)
where all { i in 1..n | arr[i-1] <= arr[i] }
// define type of sorted array
type sorted_ints is (int[] arr) where sorted(arr,|arr|)
// The implementation of sort
function sort(int[] items) -> (sorted_ints result):
    // true iff the last pass made no swaps
    bool clean
    // keep sweeping until a pass makes no swaps
    do:
        // reset clean flag
        clean = true
        int i = 1
        // look for unsorted pairs
        while i < |items|
        where i >= 1
        where clean ==> sorted(items,i):
            if items[i-1] > items[i]:
                // found unsorted pair
                clean = false
                int tmp = items[i-1]
                items[i-1] = items[i]
                items[i] = tmp
            i = i + 1
    while !clean
    where clean ==> sorted(items,|items|)
    // Done
    return items
Looks like the types are a lot smaller than the implementation.
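And the payoff shows up at call sites: the return type carries the guarantee around, so callers get a sorted array without re-checking anything. A hypothetical caller (the function name is mine):
// the checker accepts this because sort's return type
// already establishes the sorted_ints invariant
function example() -> sorted_ints:
    return sort([3, 1, 2])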
>>999192
If the hardware doesn't match the model, the hardware is bugged, and no software patch is going to really fix it, only work around it. If your computer broadcasts its RAM to the internet when asked, there is nothing the software side can do to fix that; it's a hardware bug outside the scope of the OS.
The whole point of having a specification is that it defines which side of an interface a fault is on. If the inputs to a function are invalid, it's not that function's fault; if the output doesn't match the model, then it's either a bug in the implementation or (much less likely) a bug in the model. Now that hardware companies use formal models for x86 (which they do, and have for parts of the system for years), you can get end-to-end verification of a system, which is what the DARPA guys are using to build drones that are secure. The software people are slowly catching up with standard practice in the hardware industry, which is formal verification.
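In Whiley terms that blame assignment is just the requires/ensures split; a sketch with a made-up accessor:
// if the caller passes an out-of-bounds index, the fault is on the
// caller's side of the interface, and the checker rejects the call
function get(int[] items, int i) -> (int r)
requires i >= 0 && i < |items|
ensures r == items[i]:
    return items[i]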
>>999193
The people who write your OS can't control whether you decide to run it on a processor which doesn't meet the specifications required for their proof, but their proof obviously doesn't apply if the processor doesn't meet the requirements of the model. I don't know why this needs to be spelled out: formal verification is a proof that says "if you run this program on hardware that meets this specification, the program will have these properties." If you run the program on a computer that doesn't meet the specification, then it's not the OS being hacked but the hardware below it.
>>999189
"unhackable" being the media's way of saying that the system has no livelocks, deadlocks, all syscalls run in bounded time, processes are kept separate, resources are only accessed in a valid way. If the hardware it's running on is compliant, it is not possible to execute a program that will violate the specification, thus it is "secure" for the definition of secure provided by the specification. For instance, it's possible to write an OS which guarantees process seperation, but doesn't have bounded time syscalls. Such a system is "secure" but not secure against DoS attacks, which makes sense because it was never required to be secure against that threat. The specification for security needs a definition of the threat model you wish to protect against, and if that threat model is too small, you can still be fucked. What can not happen on correct hardware(which can still happen with e.g. linux) is that the system claims to provide processes, but actually the processes leak.