Commit

Added PrismQ&A
mynkgoel committed Sep 27, 2024
1 parent 608912f commit 4fe868c
Showing 2 changed files with 34 additions and 0 deletions.
34 changes: 34 additions & 0 deletions _publications/prism_qa.md
@@ -0,0 +1,34 @@
---
abstract: "Voice assistants capable of answering user queries during various physical tasks have shown promise in guiding users through complex procedures. However, users often find it challenging to articulate their queries precisely, especially when unfamiliar with the specific terminologies required for machine-oriented tasks. We introduce PrISM-Q&A, a novel question-answering (QA) interaction termed step-aware QA, which enhances the functionality of voice assistants on smartwatches by incorporating Human Activity Recognition (HAR). Specifically, it continuously monitors user behavior during procedural tasks via audio and motion sensors on the smartwatch and estimates which step the user is at. When a question is posed, this information is supplied to Large Language Models (LLMs) as context, which generate responses, even to inherently vague questions like “What should I do next with this?”. Our studies confirmed users’ preference for our approach for convenience compared to existing voice assistants and demonstrated the technical feasibility evaluated with newly collected QA datasets in cooking, latte-making, and skin care tasks. Our real-time system represents the first integration of LLMs with real-world sensor data to provide situated assistance during tasks without camera use. Our code and datasets will facilitate more research in this emerging domain. "
authors:
- arakawa
- Jill Lehman
- goel
bibtex: '@inproceedings{Arakawa2024,
  title={PrISM-Q&A: Step-Aware Question Answering with Large Language Models Enabled by Multimodal Procedure Tracking using a Smartwatch},
  author={Riku Arakawa and Jill Lehman and Mayank Goel},
  booktitle={Proceedings of the ACM on Interactive, Mobile, Wearable, and Ubiquitous Technologies (IMWUT)},
  year={2024}
  }'
blurb: Step-Aware Question Answering using LLMs Enabled by Multimodal Action and Procedure Tracking using a Smartwatch
citation: 'Riku Arakawa, Jill Lehman, Mayank Goel. 2024. PrISM-Q&A: Step-Aware Question Answering with Large Language Models Enabled by Multimodal Procedure Tracking using a Smartwatch. Proceedings of the ACM on Interactive, Mobile, Wearable, and Ubiquitous Technologies (IMWUT).'
conference: Proceedings of the ACM on Interactive, Mobile, Wearable, and Ubiquitous Technologies (IMWUT)
date: '2025-01-01'
image: /images/pubs/PrismQA.png
name: PrISM-Q&A
onhomepage: true
pdf: /pdfs/prism_qa.pdf
thumbnail: /images/pubs/PrismQA.png
title: 'PrISM-Q&A: Step-Aware Question Answering with Large Language Models Enabled by Multimodal Procedure Tracking using a Smartwatch'
year: '2024'
category: activity,interaction
---
Binary file added images/pubs/PrismQA.png
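The abstract in the added front matter describes the PrISM-Q&A pipeline: audio and motion sensors on the smartwatch drive a Human Activity Recognition model that tracks the user's current procedure step, and that step is injected as context into the LLM prompt when a question is asked. The sketch below illustrates that flow under stated assumptions; every name in it (`SensorWindow`, `estimate_current_step`, `build_prompt`) is a hypothetical stand-in for illustration, not the paper's released code.

```python
# Minimal sketch of the step-aware QA flow described in the abstract.
# All names and behaviors here are hypothetical illustrations, not the
# authors' actual API or model.
from dataclasses import dataclass, field

@dataclass
class SensorWindow:
    """A window of smartwatch sensor data (hypothetical structure)."""
    audio_features: list = field(default_factory=list)   # e.g., log-mel frames
    motion_features: list = field(default_factory=list)  # e.g., IMU statistics

def estimate_current_step(window: SensorWindow, steps: list[str]) -> str:
    """Placeholder HAR model: map a sensor window to the most likely
    procedure step. A real system would use a trained classifier over
    the audio and motion features."""
    # Hypothetical: always return the first step, for illustration only.
    return steps[0]

def build_prompt(question: str, current_step: str, steps: list[str]) -> str:
    """Inject the tracked step into the LLM context, so that vague
    questions like "What should I do next with this?" become answerable."""
    return (
        f"The user is performing a procedure with these steps: {steps}.\n"
        f"They are currently on step: '{current_step}'.\n"
        f"Question: {question}\n"
        f"Answer concisely for that step."
    )

if __name__ == "__main__":
    steps = ["grind beans", "pull espresso shot", "steam milk", "pour latte art"]
    window = SensorWindow()
    step = estimate_current_step(window, steps)
    # In the real system this prompt would be sent to an LLM; here we print it.
    print(build_prompt("What should I do next with this?", step, steps))
```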
