Computer Science > Programming Languages
Title: LTL learning on GPUs
(Submitted on 19 Feb 2024 (v1), last revised 27 Mar 2024 (this version, v2))
Abstract: Linear temporal logic (LTL) is widely used in industrial verification. LTL formulae can be learned from traces. Scaling LTL formula learning is an open problem. We implement the first GPU-based LTL learner using a novel form of enumerative program synthesis. The learner is sound and complete. Our benchmarks indicate that it handles traces at least 2048 times more numerous, and on average at least 46 times faster, than existing state-of-the-art learners. This is achieved with, among other techniques, a novel branch-free LTL semantics that has $O(\log n)$ time complexity, where $n$ is trace length, while previous implementations are $O(n^2)$ or worse (assuming bitwise boolean operations and shifts by powers of 2 have unit costs -- a realistic assumption on modern processors).
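To illustrate the kind of branch-free, bitwise LTL semantics the abstract describes, the sketch below evaluates the "eventually" operator (F p) over a finite trace encoded as a bitmask, using only shift-or steps with shifts by powers of two, i.e. $O(\log n)$ unit-cost operations. This is a hedged reconstruction for intuition only, not the paper's actual GPU implementation; the function name and encoding are assumptions.

```python
def eventually(p: int, n: int) -> int:
    """Branch-free sketch of LTL 'eventually' (F p) over a trace of length n.

    The trace valuation of proposition p is encoded as a bitmask: bit i is 1
    iff p holds at step i. The result has bit i set iff p holds at some step
    j >= i. Only O(log n) shift-or steps are used, with no branching on the
    trace contents (the loop depends solely on the trace length n).
    """
    shift = 1
    while shift < n:
        p |= p >> shift  # propagate 1-bits toward earlier positions
        shift <<= 1      # doubling shifts: 1, 2, 4, ...
    return p

# Hypothetical example: a trace of length 8 where p holds only at step 5.
trace = 1 << 5
mask = eventually(trace, 8)
# F p then holds at every step up to and including 5: bits 0..5 set.
assert mask == 0b111111
```

Evaluating "globally" (G p) is the dual (a suffix-AND instead of a suffix-OR) and admits the same doubling-shift structure.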
Submission history
From: Martin Berger
[v1] Mon, 19 Feb 2024 18:58:26 GMT (7479kb,D)
[v2] Wed, 27 Mar 2024 20:00:00 GMT (7478kb,D)