cff-version: 1.2.0
title: >-
  BEARS Make Neuro-Symbolic Models Aware Of Their Reasoning
  Shortcuts
message: >-
  If you use this software, please cite it using the
  metadata from this file.
type: software
authors:
  - given-names: Emanuele
    family-names: Marconato
    email: emanuele.marconato@unitn.it
    affiliation: University of Trento
  - given-names: Samuele
    family-names: Bortolotti
    email: samuele.bortolotti@unitn.it
    affiliation: University of Trento
  - given-names: Emile
    family-names: van Krieken
    email: Emile.van.Krieken@ed.ac.uk
    affiliation: University of Edinburgh
  - given-names: Antonio
    family-names: Vergari
    email: avergari@ed.ac.uk
    affiliation: University of Edinburgh
  - given-names: Andrea
    family-names: Passerini
    email: andrea.passerini@unitn.it
    affiliation: University of Trento
  - given-names: Stefano
    family-names: Teso
    email: stefano.teso@unitn.it
    affiliation: University of Trento
identifiers:
  - type: url
    value: 'https://arxiv.org/abs/2402.12240'
repository-code: 'https://github.com/samuelebortolotti/bears'
url: 'https://samuelebortolotti.github.io/bears'
abstract: >-
  Neuro-Symbolic (NeSy) predictors that conform to symbolic
  knowledge - encoding, e.g., safety constraints - can be
  affected by Reasoning Shortcuts (RSs): They learn concepts
  consistent with the symbolic knowledge by exploiting
  unintended semantics. RSs compromise reliability and
  generalization and, as we show in this paper, they are
  linked to NeSy models being overconfident about the
  predicted concepts. Unfortunately, the only trustworthy
  mitigation strategy requires collecting costly dense
  supervision over the concepts. Rather than attempting to
  avoid RSs altogether, we propose to ensure NeSy models are
  aware of the semantic ambiguity of the concepts they
  learn, thus enabling their users to identify and distrust
  low-quality concepts. Starting from three simple
  desiderata, we derive bears (BE Aware of Reasoning
  Shortcuts), an ensembling technique that calibrates the
  model's concept-level confidence without compromising
  prediction accuracy, thus encouraging NeSy architectures
  to be uncertain about concepts affected by RSs. We show
  empirically that bears improves RS-awareness of several
  state-of-the-art NeSy models, and also facilitates
  acquiring informative dense annotations for mitigation
  purposes.
keywords:
  - Neuro-Symbolic AI
  - Uncertainty
  - Reasoning Shortcuts
  - Calibration
  - Probabilistic Reasoning
  - Concept-Based Models
date-released: '2024-02-19'