no code implementations • 20 Nov 2023 • Naoki Wake, Atsushi Kanehira, Kazuhiro Sasabuchi, Jun Takamatsu, Katsushi Ikeuchi
A GPT-4-based task planner then encodes these details into a symbolic task plan.
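A symbolic task plan, as described above, can be thought of as a sequence of discrete actions over named objects. The sketch below shows one minimal way to represent such a plan; the action vocabulary (`grasp`, `move`, `release`) and the `encode_task_plan` helper are illustrative assumptions, not the paper's actual planner output.

```python
from dataclasses import dataclass

# Hypothetical symbolic action vocabulary; the vocabulary used by the
# paper's GPT-4 planner is an assumption here, not taken from the source.
@dataclass
class Action:
    name: str    # e.g. "grasp", "move", "release"
    target: str  # object the action operates on

def encode_task_plan(steps: list[tuple[str, str]]) -> list[Action]:
    """Turn (verb, object) pairs -- stand-ins for the details a planner
    extracts from a demonstration -- into a symbolic task plan."""
    return [Action(name=verb, target=obj) for verb, obj in steps]

plan = encode_task_plan([("grasp", "cup"), ("move", "cup"), ("release", "cup")])
print([f"{a.name}({a.target})" for a in plan])
# → ['grasp(cup)', 'move(cup)', 'release(cup)']
```

In a real pipeline the step list would come from the language model's structured output rather than being hard-coded.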
no code implementations • 18 Oct 2023 • Naoki Wake, Atsushi Kanehira, Kazuhiro Sasabuchi, Jun Takamatsu, Katsushi Ikeuchi
This technical report explores the ability of ChatGPT to recognize emotions from text, which can underpin applications such as interactive chatbots, data annotation, and mental health analysis.
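Prompting a chat model for emotion recognition typically amounts to building a classification prompt and normalizing the free-form reply to a fixed label set. The sketch below illustrates that pattern; the prompt wording, the label set, and the `parse_label` fallback are assumptions, not the report's actual protocol.

```python
# Hypothetical label set; the report's actual emotion categories may differ.
EMOTIONS = ["joy", "sadness", "anger", "fear", "surprise", "neutral"]

def build_prompt(text: str) -> str:
    """Construct a single-label classification prompt for a chat model."""
    labels = ", ".join(EMOTIONS)
    return (
        f"Classify the emotion of the following text as one of: {labels}.\n"
        f"Text: {text}\n"
        "Answer with a single label."
    )

def parse_label(response: str) -> str:
    """Normalize a free-form model reply to one of the known labels."""
    reply = response.strip().lower()
    for label in EMOTIONS:
        if label in reply:
            return label
    return "neutral"  # fall back when the reply matches no label

print(parse_label("The emotion here is Joy."))  # → joy
```

The prompt would be sent to the chat API and the reply routed through `parse_label`; the parsing step matters because chat models often wrap the label in extra words.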
no code implementations • 11 Apr 2023 • Takuya Kiyokawa, Naoki Shirakura, Hiroki Katayama, Keita Tomochika, Jun Takamatsu
However, because the previous method relied on moving the object within the capture range of a fixed-point camera, the collected image dataset was limited in its capturing viewpoints.
no code implementations • 2 Apr 2021 • Takuya Kiyokawa, Hiroki Katayama, Yuya Tatsuta, Jun Takamatsu, Tsukasa Ogasawara
Via experiments in an indoor waste-sorting workplace, we confirm that the proposed methods enable quick collection of training image sets for three classes of waste items (i.e., aluminum cans, glass bottles, and plastic bottles) and achieve higher detection performance than methods that do not consider these differences.
no code implementations • 4 Aug 2020 • Naoki Wake, Riku Arakawa, Iori Yanokura, Takuya Kiyokawa, Kazuhiro Sasabuchi, Jun Takamatsu, Katsushi Ikeuchi
In the context of one-shot robot teaching, the contributions of the paper are to propose a framework that 1) covers various tasks in the grasp-manipulation-release class of household operations and 2) mimics human postures during the operations.
Robotics • Human-Computer Interaction
no code implementations • 22 Nov 2018 • Feiran Li, Gustavo Alfonso Garcia Ricardez, Jun Takamatsu, Tsukasa Ogasawara
For the remaining holes, we employ an exemplar-based multi-view inpainting method on the color image and use the result as guidance to coherently complete the corresponding depth.
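Using an inpainted color image to guide depth completion can be illustrated with a much simpler stand-in than the paper's exemplar-based method: fill each depth hole from nearby valid pixels, weighting each neighbor by how similar its color is to the hole pixel's color. The window size and Gaussian weighting below are assumptions for illustration only.

```python
import numpy as np

def fill_depth(depth: np.ndarray, color: np.ndarray,
               win: int = 1, sigma: float = 10.0) -> np.ndarray:
    """Fill depth holes (depth == 0) by a color-similarity-weighted average
    of valid neighbors. A toy stand-in for guided depth completion."""
    out = depth.astype(float).copy()
    h, w = depth.shape
    for y in range(h):
        for x in range(w):
            if depth[y, x] != 0:
                continue  # pixel already has valid depth
            weights, values = [], []
            for dy in range(-win, win + 1):
                for dx in range(-win, win + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and depth[ny, nx] != 0:
                        # Similar colors get high weight (Gaussian kernel).
                        diff = float(color[y, x]) - float(color[ny, nx])
                        weights.append(np.exp(-(diff ** 2) / (2 * sigma ** 2)))
                        values.append(depth[ny, nx])
            if weights:
                out[y, x] = np.dot(weights, values) / sum(weights)
    return out

# Toy grayscale guidance: the hole at (0, 1) shares its color with the
# depth-5 region, so it is filled close to 5 rather than 7.
depth = np.array([[5, 0, 7],
                  [5, 5, 7]], dtype=float)
color = np.array([[10, 10, 40],
                  [10, 10, 40]], dtype=float)
filled = fill_depth(depth, color)
```

The same idea scales to the multi-view setting: the color guidance simply becomes the inpainted image, and the weighting keeps depth edges aligned with color edges.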