Computer Science > Programming Languages

Title: LTL learning on GPUs

Abstract: Linear temporal logic (LTL) is widely used in industrial verification. LTL formulae can be learned from traces, but scaling LTL formula learning is an open problem. We implement the first GPU-based LTL learner using a novel form of enumerative program synthesis. The learner is sound and complete. Our benchmarks indicate that it handles traces at least 2048 times more numerous, and runs on average at least 46 times faster, than existing state-of-the-art learners. This is achieved with, among other techniques, a novel branch-free implementation of LTL semantics that has $O(\log n)$ time complexity, where $n$ is trace length, while previous implementations are $O(n^2)$ or worse (assuming bitwise boolean operations and shifts by powers of 2 have unit cost -- a realistic assumption on modern processors).
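To give a flavour of the branch-free, shift-based idea the abstract alludes to, here is a minimal sketch of evaluating the LTL "eventually" operator (F p) over a finite trace encoded as a bitvector. This is an illustration only, not the paper's actual GPU implementation; the encoding (bit i set iff p holds at trace step i) is an assumption made for this example.

```python
# Hedged sketch: branch-free "eventually" (F p) via a suffix-OR computed
# with shifts by powers of 2. Only O(log n) bitwise operations are needed,
# versus a naive O(n) scan per position (O(n^2) overall).

def eventually(bits: int, n: int) -> int:
    """Suffix-OR over an n-bit trace: result bit i is set iff
    some bit j with j >= i is set, i.e. F p holds at step i."""
    shift = 1
    while shift < n:
        bits |= bits >> shift  # propagate set bits toward lower positions
        shift <<= 1            # double the shift distance each round
    return bits

# Example: p holds only at step 2 of a 5-step trace.
trace_p = 0b00100
print(bin(eventually(trace_p, 5)))  # F p holds at steps 0, 1, 2 -> 0b111
```

The same doubling pattern is branch-free in the sense that the loop's trip count depends only on the trace length, never on the trace's contents, which is what makes it well suited to lockstep GPU execution.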
Comments: 27 pages
Subjects: Programming Languages (cs.PL); Artificial Intelligence (cs.AI)
MSC classes: 68
ACM classes: D.3
Cite as: arXiv:2402.12373 [cs.PL]
  (or arXiv:2402.12373v2 [cs.PL] for this version)

Submission history

From: Martin Berger [view email]
[v1] Mon, 19 Feb 2024 18:58:26 GMT (7479kb,D)
[v2] Wed, 27 Mar 2024 20:00:00 GMT (7478kb,D)
