Please use this identifier to cite or link to this item:
https://rda.sliit.lk/handle/123456789/782
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Pulasinghe, K | - |
dc.contributor.author | Watanabe, K | - |
dc.contributor.author | Izumi, K | - |
dc.contributor.author | Kiguchi, K | - |
dc.date.accessioned | 2022-01-26T06:53:49Z | - |
dc.date.available | 2022-01-26T06:53:49Z | - |
dc.date.issued | 2004-01-30 | - |
dc.identifier.citation | K. Pulasinghe, K. Watanabe, K. Izumi and K. Kiguchi, "Modular fuzzy-neuro controller driven by spoken language commands," in IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 34, no. 1, pp. 293-302, Feb. 2004, doi: 10.1109/TSMCB.2003.811511. | en_US |
dc.identifier.issn | 1083-4419 | - |
dc.identifier.uri | https://rda.sliit.lk/handle/123456789/782 | - |
dc.description.abstract | We present a methodology for controlling machines using spoken language commands. Two major problems relating to speech interfaces for machines, namely, the interpretation of words with fuzzy implications and out-of-vocabulary (OOV) words in natural conversation, are investigated. The system proposed in this paper is designed to overcome these two problems in controlling machines using spoken language commands. The present system consists of a hidden Markov model (HMM) based automatic speech recognizer (ASR), with a keyword spotting system to capture the machine-sensitive words from running utterances, and a fuzzy-neural network (FNN) based controller to represent the words with fuzzy implications in spoken language commands. The significance of the words, i.e., the contextual meaning of the words according to the machine's current state, is introduced to the system to obtain output that more closely matches the user's intention. Modularity of the system is also considered, to generalize the methodology to systems with heterogeneous functions without diminishing performance. The proposed system is experimentally tested by navigating a mobile robot in real time using spoken language commands. | en_US |
dc.language.iso | en | en_US |
dc.publisher | IEEE | en_US |
dc.relation.ispartofseries | IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics); Vol. 34, Issue 1, Pages 293-302 | - |
dc.subject | Modular Fuzzy-Neuro | en_US |
dc.subject | Controller Driven | en_US |
dc.subject | Spoken Language Commands | en_US |
dc.title | Modular fuzzy-neuro controller driven by spoken language commands | en_US |
dc.type | Article | en_US |
dc.identifier.doi | 10.1109/TSMCB.2003.811511 | en_US |
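The abstract describes mapping spoken words with fuzzy implications (e.g., "slowly", "fast") to crisp machine control values. As a minimal illustrative sketch only, not the paper's actual fuzzy-neural network, the idea of representing such words with fuzzy sets and defuzzifying them into a motor speed can be shown with triangular membership functions (the word list, the set parameters, and the function names here are all hypothetical):

```python
def triangular(x, a, b, c):
    """Triangular membership function over [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical fuzzy sets over a normalized speed range [0, 1] for
# machine-sensitive words spotted in a spoken command.
FUZZY_SPEED_WORDS = {
    "slowly": (0.0, 0.2, 0.5),
    "moderately": (0.2, 0.5, 0.8),
    "fast": (0.5, 0.8, 1.0),
}

def defuzzify(word, samples=101):
    """Centroid defuzzification of one word's fuzzy set into a crisp speed."""
    a, b, c = FUZZY_SPEED_WORDS[word]
    xs = [i / (samples - 1) for i in range(samples)]
    mus = [triangular(x, a, b, c) for x in xs]
    return sum(x * m for x, m in zip(xs, mus)) / sum(mus)

# A keyword-spotting front end would pass the recognized fuzzy word here;
# the crisp value would then drive the robot's speed command.
speed = defuzzify("moderately")
```

In the paper's system this mapping is learned by a fuzzy-neural network and is further modulated by the machine's current state; the fixed membership functions above only sketch the defuzzification step.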
Appears in Collections:
- Research Papers - IEEE
- Research Papers - SLIIT Staff Publications
- Research Publications - Dept of Information Technology
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Modular_fuzzy-neuro_controller_driven_by_spoken_language_commands.pdf (embargoed until 2050-12-31) | | 538.68 kB | Adobe PDF | |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.