Minutes 31 Aug 2023
Host: Paul Albertella
Participants: Igor Stoppa, Sebastian Hetze, Dana Vede, Daniel Krippner, Peter Brink
Agenda:
- ELISA workshop in Munich (Oct 16-18)
- Relevant safety mechanisms for Linux that we could analyse
- Next steps / execution approach for safety analysis
Sebastian - Red Hat:
- Not working on Red Hat's vehicle project, but on an automated train research project
- Here to understand what ELISA is doing and how it may be relevant
- Would like to be able to guide and encourage other members of the project to engage with ELISA and other open source communities
- ELISA workshop: Paul is planning to attend
- Dana and Daniel would have liked to attend, but it clashes with the Eclipse SDV community day
- See previous minutes for the start of this discussion
- Program flow monitoring: something in a safety-critical context that monitors a nominal activity
- The monitor needs a degree of independence from the thing being monitored (see the sketch below)
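As a sketch of what such independent program-flow monitoring could look like (the checkpoint names, table and deadlines below are illustrative assumptions, not anything agreed in the meeting):

```c
/* Sketch of an external program-flow monitor: a nominal process reports
 * checkpoints and an independent checker verifies order and timing.
 * All identifiers here are illustrative, not an ELISA design. */
#include <stdio.h>
#include <stdbool.h>
#include <time.h>

/* Expected control flow, flattened to a linear checkpoint sequence. */
enum checkpoint { CP_INIT, CP_READ_SENSOR, CP_COMPUTE, CP_ACTUATE };

struct rule {
    enum checkpoint expect;   /* which checkpoint must come next    */
    double deadline_s;        /* max seconds since the previous one */
};

static const struct rule program_flow[] = {
    { CP_INIT,        1.0 },
    { CP_READ_SENSOR, 0.1 },
    { CP_COMPUTE,     0.1 },
    { CP_ACTUATE,     0.1 },
};

/* Monitor state: position in the expected sequence, last timestamp. */
static size_t pos;
static struct timespec last;

static double elapsed_s(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

/* Called (e.g. via IPC) each time the monitored process reports a
 * checkpoint. Returns false on any deviation from nominal flow, at
 * which point the monitor would trigger a safe-state reaction. */
bool monitor_check(enum checkpoint cp)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);

    const struct rule *r = &program_flow[pos];
    if (cp != r->expect) {
        fprintf(stderr, "flow error: got %d, expected %d\n", cp, r->expect);
        return false;
    }
    if (pos > 0 && elapsed_s(last, now) > r->deadline_s) {
        fprintf(stderr, "deadline missed at checkpoint %d\n", cp);
        return false;
    }
    last = now;
    pos = (pos + 1) % (sizeof program_flow / sizeof program_flow[0]);
    return true;
}

int main(void)
{
    /* Simulate a nominal run of the monitored program. */
    enum checkpoint run[] = { CP_INIT, CP_READ_SENSOR, CP_COMPUTE, CP_ACTUATE };
    for (size_t i = 0; i < sizeof run / sizeof run[0]; i++)
        if (!monitor_check(run[i]))
            return 1;   /* would escalate to the watchdog / safe state */
    puts("nominal flow verified");
    return 0;
}
```

In a real deployment the checker would run in a separate protection domain (or outside Linux entirely) and receive checkpoints over IPC, so that a fault in the monitored program cannot silence the monitor.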
- Igor: Security has a chain-of-trust concept that can be the basis for a claim
- We can use a similar conceptual model for safety
- e.g. an external hardware watchdog at the root of a chain of trust (a sketch follows below)
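A hedged sketch of a watchdog at the root of such a chain, using the standard Linux watchdog device interface (/dev/watchdog); the check_system() hook is a hypothetical placeholder for the monitor's own checks:

```c
/* Sketch: an external hardware watchdog as the root of a chain of
 * trust. The monitor pets the watchdog only while its own checks pass;
 * if the monitor (or anything it depends on) fails, the hardware
 * resets the system. */
#include <fcntl.h>
#include <stdio.h>
#include <stdbool.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/watchdog.h>

/* Placeholder for the monitor's checks (e.g. the flow monitor above). */
static bool check_system(void) { return true; }

int main(void)
{
    int fd = open("/dev/watchdog", O_WRONLY);
    if (fd < 0) { perror("open /dev/watchdog"); return 1; }

    int timeout = 5;                  /* seconds until hardware reset */
    ioctl(fd, WDIOC_SETTIMEOUT, &timeout);

    while (check_system()) {
        write(fd, "\0", 1);           /* pet the watchdog */
        sleep(1);                     /* well inside the timeout */
    }

    /* A failed check: stop petting and let the hardware force a reset,
     * rather than trusting possibly faulty software to recover. */
    fprintf(stderr, "check failed; allowing watchdog to expire\n");
    pause();
    return 0;
}
```

The design point is that a hardware reset is the default outcome: the software must keep actively demonstrating health to prevent it, rather than having to detect its own failure.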
- Can look for both positive and negative behaviour
- Igor: Flow control is only one aspect; the more fundamental problem is memory: the kernel can theoretically corrupt any process's memory
- This is not a particular problem in security, because pages are not contiguous, so it cannot readily be exploited to access protected memory areas
- Igor: On arm64 there are two page tables, one for userspace and one for the kernel (referenced via the TTBR0 and TTBR1 base registers). In some circumstances both are available at the same time to a kernel process. There is also metadata about the page tables and about the memory allocators.
- Paul: TrustZone enables physical memory addresses to be segmented and defined as ‘secure’. Can we use this as a basis for protecting memory for safety processes?
- Igor: This does not seem like a feasible solution; it really requires more fundamental kernel design changes. cgroups, SELinux, etc. add a huge amount of additional complex code that is built on this potentially unsound foundation
- “Climbing the mirror” - sounds a bit slippery!
- If we cannot have confidence in Linux as ‘safe’, or prove that it is safe, is a provably safe monitor a possible answer? e.g. a monitor running on a safety island (a Cortex-R5, or a pair of Cortex-M cores, running in lockstep) - see the sketch below
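As an illustration of what the safety-island side of such a monitor might check (the heartbeat record layout and conventions are assumptions for the sketch, not an agreed design):

```c
/* Sketch of the checking logic a safety-island monitor (e.g. lockstep
 * cores) might run against a heartbeat record that Linux publishes in
 * shared memory. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

struct heartbeat {
    uint32_t counter;   /* incremented by Linux every cycle   */
    uint32_t status;    /* application-level health summary   */
    uint32_t crc;       /* integrity check over the two above */
};

/* Toy checksum stand-in; a real design would use a proper CRC-32. */
static uint32_t checksum(const struct heartbeat *hb)
{
    return hb->counter ^ hb->status ^ 0xA5A5A5A5u;
}

/* Runs once per monitor cycle on the safety island. Returns false if
 * Linux has stalled or the record is corrupt -> enter safe state. */
static bool heartbeat_ok(const volatile struct heartbeat *shared,
                         uint32_t *last_counter)
{
    struct heartbeat hb = { shared->counter, shared->status, shared->crc };
    if (checksum(&hb) != hb.crc)
        return false;                 /* corrupt record             */
    if (hb.counter == *last_counter)
        return false;                 /* Linux side has stalled     */
    *last_counter = hb.counter;
    return hb.status == 0;            /* 0 = healthy, by convention */
}

int main(void)
{
    /* Simulate a few cycles with Linux updating the record. */
    static struct heartbeat shm;      /* stands in for shared memory */
    uint32_t last = 0;
    for (uint32_t i = 1; i <= 3; i++) {
        shm.counter = i;
        shm.status = 0;
        shm.crc = checksum(&shm);
        printf("cycle %u: %s\n", i, heartbeat_ok(&shm, &last) ? "ok" : "FAIL");
    }
    return 0;
}
```

Note that this only tells the island that Linux is alive and self-reports as healthy; Igor's point below is about whether meaningful criteria can be monitored this way at all.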
- Igor: Is it really possible for non-trivial processes to define criteria that can be monitored in this way in a timely and performant fashion? Would it not be simpler to prevent the problems from happening in the first place, e.g. by implementing a safer memory model specifically for use by Linux? This would give a trustable basis for isolating safety processes.
- Daniel: Perhaps Red Hat are already looking into a solution for this? We should ask Gab P.
- Paul: This is an example of a fundamental issue that would always need to be addressed, either by changing Linux or by showing that an external mechanism can deal with its consequences.
- We should identify and document the failure modes that arise from fundamental issues such as this
- This could help build wider consensus about how to address them, and provide a checklist of potential safety issues that need to be addressed in any given Linux-based solution
- Memory feels like a good topic for us to focus on as a starting point